(i) Precision (P): the number of sentences occurring in both the system-generated summary and the ideal summary, divided by the number of sentences in the system-generated summary.

Precision (P) = (A∩B)/A

where `A' denotes the set of sentences obtained by the summarizer and `B' denotes the set of relevant sentences in the target (ideal) summary.

(ii) Recall (R): the number of sentences occurring in both the system-generated summary and the ideal summary, divided by the number of sentences in the ideal summary.

Recall (R) = (A∩B)/B

(iii) F-measure: the combined measure that incorporates both Precision and Recall.

F-Score = (2×P×R)/(P+R)

The evaluation results for the first 10 documents are given below in Table 1.

Table 1. Precision, Recall and F-score results

Document       Precision (P)  Recall (R)  F-Score
1              0.84           0.71        0.76
2              0.79           0.72        0.75
3              0.82           0.69        0.74
4              0.82           0.68        0.74
5              0.79           0.71        0.74
6              0.82           0.73        0.75
7              0.78           0.72        0.73
8              0.85           0.70        0.75
9              0.85           0.71        0.76
10             0.84           0.71        0.76
Average Score  0.82           0.70        0.74

5 Conclusion

We have developed an automatic Bengali document summarizer using Python as the programming platform. Ample resources exist for processing and summarizing English documents, but they are not directly applicable to Bengali, whose grammar and sentence structure differ considerably from English. Work on Bengali is further hampered by the lack of established tools to facilitate research, yet it is necessary, as about 260 million people use the language. We have therefore proposed a new approach to Bengali document summarization.
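The sentence-overlap metrics defined above can be sketched as follows. This is a minimal illustration with hypothetical sentence labels (s1 through s6); summaries are treated as sets of sentences, as the definitions imply.

```python
def evaluate_summary(system_sentences, ideal_sentences):
    """Sentence-overlap precision, recall and F-score."""
    A = set(system_sentences)   # sentences chosen by the summarizer
    B = set(ideal_sentences)    # sentences in the ideal (reference) summary
    overlap = len(A & B)
    precision = overlap / len(A)
    recall = overlap / len(B)
    f_score = (2 * precision * recall) / (precision + recall) if overlap else 0.0
    return precision, recall, f_score

# Hypothetical example: a 5-sentence system summary and a 4-sentence
# ideal summary sharing 3 sentences: P = 3/5, R = 3/4.
p, r, f = evaluate_summary(["s1", "s2", "s3", "s4", "s5"],
                           ["s1", "s2", "s3", "s6"])
```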
Here, the system design proceeds by preprocessing the input document, tagging the words, replacing pronouns, and ranking sentences, in that order. Pronoun replacement was added to minimize the rate of dangling pronouns in the output summary. After pronoun replacement, sentences were ranked according to sentence frequency, numerical figures (in both digit and word form), and the document title; a sentence containing any word that also appears in the title was likewise taken into account. The similarity between pairs of sentences was checked so that one of two near-duplicates could be removed, which reduces redundancy. Numerical figures also carry weight, so they were identified as well. We trained on over 3,000 newspaper and book documents, with words handled according to the grammar, and two documents were checked by the designed system to evaluate the efficiency of the summarizer. From the evaluation it was found that the Recall, Precision and F-score are 0.70 (70%), 0.82 (82%) and 0.74 (74%), respectively.
Sentence Similarity Estimation for Text Summarization Using Deep Learning

Conference Paper · December 2017

Sheikh Abujar (Daffodil International University, Dhanmondi, Dhaka, Bangladesh; sheikh.cse@diu.edu.bd), Mahmudul Hasan (Comilla University, Comilla, Bangladesh; mhasanraju@gmail.com), Syed Akhter Hossain (Daffodil International University, Dhanmondi, Dhaka, Bangladesh; aktarhossain@daffodilvarsity.edu.bd)

Abstract. One of the key challenges of Natural Language Processing (NLP) is to identify the meaning of any text. Text summarization is one of the most challenging applications in the field of NLP, as it requires appropriate analysis of the given input text. Identifying the degree of relationship among input sentences helps to reduce the inclusion of insignificant sentences in the summarized text.
The result of summarized text may not always be identified by optimal functions; rather, a better summarized result can be found by measuring sentence similarities. Current sentence similarity measuring methods only find the similarity between words and sentences, and state only the syntactic information of every sentence. There are two major problems in identifying similarities between sentences that were never addressed by previous strategies: providing the ultimate meaning of the sentence, and accounting for word order, at least approximately. In this paper, the main objective is to measure sentence similarities in a way that helps to summarize text in any language, though we consider English and Bengali here. Our proposed methods were extensively tested on several English and Bengali texts collected from online news portals, blogs, etc. In all cases, the proposed sentence similarity measures proved effective and satisfactory.

Keywords: Sentence Similarity, Lexical Analysis, Semantic Analysis, Text Summarization, Bengali Summarization, Deep Learning.

1 Introduction

Text summarization is a tool that attempts to provide a gist or summary of any given text automatically. It helps the reader to understand a large document in a very short time by conveying the main idea and/or information of the entire text through a summarized text. To produce a proper summarization there are several steps to follow, i.e. lexical analysis, semantic analysis and syntactic analysis. Possible methods and research findings regarding sentence similarity are stated in this paper. The Bengali language has a very different sentence structure, and handling Bengali characters can be difficult on various programming platforms. The best way to preprocess both Bengali and English sentences before deep analysis is to use Unicode [2]. Once a sentence is represented in a standard form, it becomes possible to identify sentence and word structure as needed.
The degree of sentence similarity is assessed by methods that identify similarity between sentences as well as between long and short texts. Sentence similarity measures should state
information such as whether two or more sentences match fully in lexical or semantic form, whether sentences match partially, and whether a leading sentence can be found. Identifying the centroid sentence is one of the major tasks to accomplish [1]. A few sentences may contain major or important words that cannot be identified by word frequency alone. So, depending only on word frequency may not always provide the expected output, though in many cases the most frequent words do relate to the topic models. Sentences that are the same in meaning but different in structure have to be avoided when building a better text summarizer [3], though related or supporting sentences may add value to the leading sentences [4]. Finally, the most leading sentence and the relationships between sentences can be determined. In this paper, we discuss several important factors in assessing sentence and text similarity. The major findings are described in detail and, more importantly, a potential deep learning method and model are stated here. Several experimental results are reported and explained with the necessary measures.

2 Literature Review

Text summarization approaches are basically either abstractive or extractive. The extractive method applies several manipulation rules over words, sentences or paragraphs and, based on weighted values or other measures, chooses the appropriate sentences. Abstractive summarization requires several operations such as sentence fusion, compression and basic reformulation (Mani & Maybury, 1999; Wan, 2008) [5]. Oliva et al. (2011) introduced a model, SyMSS [6], which measures sentence similarity by assessing how the syntactic structures of two different sentences influence each other. A syntactic dependency tree helps to identify the rooted sentence as well as the similar sentence. This method states that every word in a sentence has some syntactic connections, and these create the meaning of every sentence.
The combination of LSA (Deerwester et al., 1990) [7] and WordNet (Miller, 1995) [9] to assess the similarity between the words of each sentence was proposed by Han et al. (2013) [8]. They proposed two different methods to measure sentence similarity. The first one groups words and is known as the align-and-penalize approach; the second is known as the SVM approach, where different similarity measures based on n-grams are combined using Support Vector Regression (SVR), implemented with LIBSVM (Chang and Lin, 2011) [10], as another similarity measure. A threshold-based model always returns a similarity value between 0 and 1. Mihalcea et al. (2006) [11] represent all sentences as bag-of-words vectors and consider the first sentence as the main sentence. To measure word-to-word similarity, they use the highest semantic similarity between the main sentence and the next sentence, and the process is repeated until the second main sentence is found. Das and Smith (2009) [12] introduced a probabilistic model that performs syntax- and semantics-based analysis. Heilman and Smith (2010) [13] introduced a new tree-edit method that captures syntactic relations between input sentences and also identifies paraphrases. To identify sentence-based dissimilarity, a supervised
two-phase framework using semantic triples has been presented (Qiu et al., 2006) [14]. A Support Vector Machine (SVM) can combine distributional, shallow textual [15]-[17] and knowledge-based models using a support vector regression model.

3 Proposed Method

This section presents a newly proposed sentence similarity measuring model for the English and Bengali languages. The assessment methods, sentence representation and degree of sentence similarity are explained in detail. The steps required specifically for the Bangla language were considered while developing the proposed model, which works for measuring both English and Bengali sentence similarity. The sentence structure and lexical form of Bangla are very different, and semantic and syntactic measures can add further value in this regard. Working through all these necessary steps helps to produce better output in every aspect. In this research, lexical methods have been applied, and ultimately the expected result has been obtained.

A. Lexical Layer Analysis: The lexical layer has a few major functions to perform, such as lexical representation and lexical similarity, each of which has several further states. Fig. 1 shows the proposed model for the lexical layer.

Fig. 1. Lexical layer analysis model (components: token input from Sentence 1 and Sentence 2, word-word similarity, sentence-sentence similarity, order vector, order similarity, and a WordNet database feeding the similarity modules).

Figure 1 introduces the sentence similarity measures for lexical analysis. Different sentences are split into tokens. A word-to-word and a sentence-to-sentence analyzer operate together. An order vector records the word and/or sentence order in a sequence based on similarity measures; with reference to a weighted sum, the order of words and sentences is given preference. A WordNet database supplies lexical resources to the word-to-word and sentence-to-sentence processes.
Ultimately, based on the order preference, the values from three different states (word-word similarity, sentence-sentence similarity and order similarity) generate the similar-sentence output. The method is followed by one of the popular graph-based ranking algorithms, TextRank.

1. Lexical Analysis: this stage splits sentences and words into separate tokens for further processing.
2. Stop Words Removal: several words, such as articles and pronouns, hold little representative information. These types of words can be removed during text analysis.
3. Lemmatization: this step converts each token into its basic dictionary form, e.g. reducing an inflected verb to its initial form.
4. Stemming: stemming is a further stage of word analysis. Both the word-word and sentence-sentence methods need their contents (texts/words) in a unique form, so every word is reduced to its root. For example, "play" and "player" are different as words, though in deeper meaning both can be considered branches of the word "play". By using a stemmer, all such text is brought into a unique form before further processing, which reduces the confusion of words that differ in structure but share the same inner meaning. So, it is a
very basic part of the text preprocessing modules.

Fig. 2. Lexical layer processing of input sentences.

Figure 2 states how the lexical steps are processed, with an appropriate example. All the necessary processes, i.e. lexical analysis, stop-word removal and stemming, are carried out as described above. The resulting sentences are used for further experiments in this paper.

Input Sentence 1: The growing needs are far outpacing resources
Lexical Analysis: the; growing; needs; are; far; outpacing; resources
Stop Words Removal: growing; needs; far; outpacing; resources
Stemming: grow; need; far; outpace; resource

Input Sentence 2: The growing needs are beyond outpacing resources
Lexical Analysis: the; growing; needs; are; beyond; outpacing; resources
Stop Words Removal: growing; needs; beyond; outpacing; resources
Stemming: grow; need; beyond; outpace; resource

B. Sentence similarity: The path measure senses the relatedness of words from the WordNet hierarchies. It calculates and returns the path distance between two words, and is used here to obtain similarity scores between word pairs. The path measure is calculated through Eq. (1):

Path_measure(token1, token2) = 1 / Path_length(token1, token2)   (1)

Path_measure receives two tokens, token1 and token2, each holding a word obtained by splitting a sentence. Path_length returns the distance between the two concepts in WordNet. The Levenshtein distance (Lev.) algorithm is used to build the similarity matrix between words; to identify sentence similarity, measuring word similarity is of primary importance. Lev. counts the minimum number of insertion, deletion and modification operations on characters required to transform one string into another. Here it is used as a distance and/or similarity measure between words.
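The Fig. 2 preprocessing steps can be sketched as a small pipeline. This is a minimal, self-contained illustration: the stop-word list and the suffix-stripping stemmer below are hand-rolled stand-ins (a real system would use, for example, NLTK's stopword corpus and a Porter-style stemmer), and the `outpacing` exception entry is a toy fix sufficient for this example.

```python
# Toy stand-in for a real stop-word list.
STOP_WORDS = {"the", "a", "an", "are", "is", "of", "and", "to"}

def toy_stem(word):
    # Toy stemmer, sufficient for the Fig. 2 example only.
    exceptions = {"outpacing": "outpace"}
    if word in exceptions:
        return exceptions[word]
    for suffix in ("ing", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def preprocess(sentence):
    tokens = sentence.lower().split()                      # 1. lexical analysis
    content = [t for t in tokens if t not in STOP_WORDS]   # 2. stop-word removal
    return [toy_stem(t) for t in content]                  # 3. stemming
```

Running `preprocess("The growing needs are far outpacing resources")` reproduces the stemmed token list of Input Sentence 1.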
LCS (longest common subsequence) was also implemented, though the expected output was obtained using Lev.; note that LCS does not allow substitutions. The distance between sentences based on Lev. is calculated with Eq. (2):

LevSim = 1.0 - (Lev.Distance(W1, W2) / maxLength(W1, W2))   (2)

The degree of relationship helps to produce a better text summarizer by analyzing text similarity. The measurement can be word-word, word-sentence, sentence-word or sentence-sentence; in this research we discuss the similarity between two different words. After splitting every individual sentence there is a set of words W = {W1, W2, W3, W4, ..., Wn}. Lev.Distance calculates the distance between two words W1 and W2, and maxLength returns the maximum number of characters found in W1 and W2. Only the similarity between two different words is checked. The similarity between words can be measured by Algorithm 1.

Algorithm 1. Similarity between words
1: W1 = Sentence1.Split(" ")
2: W2 = Sentence2.Split(" ")
3: if Path_measure(W1, W2) < 0.1 then
4:     W_similarity = LevSim(W1, W2)
5: else
6:     W_similarity = Path_measure(W1, W2)
7: end if

In Algorithm 1, the path value depends on the distance values, and the Levenshtein similarity (LevSim) value is obtained from Eq. (2). A word similarity score of less than 0.1 is recalculated through the LevSim method; otherwise the score is accepted from the path measure algorithm. W_similarity receives the similarity score between
two words. The score ranges between 0.00 and 1.00. Table 1 presents the similarity values of the words from Sentence 1.

The Wu and Palmer measure (WP) uses the WordNet taxonomy to identify the global depth (relatedness) of two similar or different concepts or words, measuring edge distance and calculating the depth of the LCS (Least Common Subsumer) of the two inputs. Based on Eq. (3), WP returns a relatedness score if any relation and/or path exists between the two words; if no path exists, it returns a negative number. If the two inputs are identical, the synset output is simply 1.

WP_Score = 2 × Depth(LCS) / (depth(t1) + depth(t2))   (3)

In Eq. (3), t1 and t2 are tokens of Sentence 1 and Sentence 2. Table 2 states the WP similarity values of the given input (as shown in Fig. 2).

The Lin measure (Lin.) calculates the relatedness of words or concepts based on information content. Ideally the Lin value is zero when the synset is the root node; if the frequency of the synset is zero, the result is also zero, but the cause is then a lack of information or data. Eq. (4) is used to compute the Lin value, and Table 3 states the output values after applying the input sentences to the Lin measure.

Lin_Score = 2 × IC(LCS) / (IC(t1) + IC(t2))   (4)

In Eq. (4), IC is the information content. A new similarity measure was devised in which all the mentioned algorithms and/or methods are used. Eq. (5) states the new similarity measure:

total_Sim(t1, t2) = (Lev_Sim(t1, t2) + WP_Score(t1, t2) + Lin_Score(t1, t2)) / 3   (5)

In Eq. (5), a new total similarity value is generated from all the mentioned lexical and semantic analyses. Edge distance, the global depth measure and the analysis of information content are all essential here.
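Eq. (2) and the Eq. (5) combination can be sketched as follows. The Levenshtein part is self-contained; the WP and Lin scores require a WordNet lookup (e.g. `wup_similarity` and `lin_similarity` in NLTK's WordNet interface), so in this sketch they are passed in as precomputed values rather than computed here.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance (insert/delete/substitute).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[len(b)]

def lev_sim(w1: str, w2: str) -> float:
    # Eq. (2): LevSim = 1.0 - Lev.Distance / maxLength.
    return 1.0 - levenshtein(w1, w2) / max(len(w1), len(w2))

def total_sim(lev: float, wp: float, lin: float) -> float:
    # Eq. (5): average of the lexical (Lev) and semantic (WP, Lin) scores.
    return (lev + wp + lin) / 3
```

For example, `levenshtein("outpace", "outpacing")` is 3, so `lev_sim` returns 1 - 3/9 for that pair.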
For that purpose the method was applied, and the experimental output is shown in Table 4.

Algorithm 2. The proposed similarity algorithm
1: matrix = new matrix(size(X) * size(Y))
2: total_sim = 0
3: i = 0
4: j = 0
5: for i ∈ A do
6:     for j ∈ B do
7:         matrix(i, j) = similarity_token(t1, t2)
8:     end for
9: end for
10: for has_line(matrix) and has_column(matrix) do
11:     total_Sim = (Lev_Sim(matrix) + WP_Score(matrix) + Lin_Score(matrix)) / 3
12: end for
13: return total_Sim

Algorithm 2 receives the tokens of two input texts, X and Y, and creates a matrix representation of m×n dimensions. The variable total_sim (total similarity) and the iteration indices i and j are initialized to 0. First, matrix(i, j) is filled with the token-pair similarity values; then total_sim records and updates the similarity of the pair of sentences based on the token matrix matrix(i, j).

4 Experimental Results and Discussion

Several English and Bengali texts were passed through the proposed lexical layer to find the sentence similarity measure.
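Algorithm 2 can be sketched as follows. The per-pair scorer `token_sim` is passed in as a parameter (in the paper it would be the Eq. (5) hybrid), and the aggregation step, best match per row then average, is one plausible reading of lines 10-12 of the pseudocode, not the authors' exact rule.

```python
def sentence_similarity(tokens_x, tokens_y, token_sim):
    """Algorithm 2 sketch: fill an m x n token-pair similarity matrix,
    then aggregate it into one sentence-pair score."""
    matrix = [[token_sim(t1, t2) for t2 in tokens_y] for t1 in tokens_x]
    if not matrix or not matrix[0]:
        return 0.0
    # Aggregation: take the best-matching column for each row, then average.
    best_per_row = [max(row) for row in matrix]
    return sum(best_per_row) / len(best_per_row)

# Illustration with a trivial exact-match token scorer.
exact = lambda a, b: 1.0 if a == b else 0.0
score = sentence_similarity(["grow", "need"], ["grow", "far"], exact)
```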
Texts were collected from online resources, e.g. www.prothom-alo.com, bdnews24.com, etc. Our Python web crawler initially saved the web (HTML content) data into text files. We used Python's Natural Language Toolkit (NLTK, version 3) and WS4J (a Java API developed specifically for WordNet use). All the experimental results are stated below in this section.

Table 1: Similarity score between words using path measure and LevSim
          grow  need  far   outpace  resource
grow      1.00  0.25  0.00  0.14     0.00
need      0.25  1.00  0.11  0.17     0.14
far       0.00  0.11  1.00  0.00     0.09
outpace   0.14  0.17  0.00  1.00     0.00
resource  0.00  0.14  0.09  0.00     1.00

Table 2: Similarity score between words using the Wu and Palmer measure (WP)
          grow  need  far   outpace  resource
grow      1.00  0.40  0.00  0.25     0.00
need      0.40  1.00  0.43  0.29     0.57
far       0.00  0.43  1.00  0.00     0.38
outpace   0.25  0.29  0.00  1.00     0.00
resource  0.00  0.57  0.38  0.00     1.00

Table 3: Similarity score between words using the Lin measure (Lin.)
          grow  need  far   outpace  resource
grow      1.00  0.40  0.00  0.25     0.00
need      0.40  1.00  0.43  0.29     0.57
far       0.00  0.43  1.00  0.00     0.38
outpace   0.25  0.29  0.00  1.00     0.00
resource  0.00  0.57  0.38  0.00     1.00

Tables 1, 2 and 3 state the experimental similarity results obtained with the path measure plus LevSim, the Wu and Palmer measure (WP) and the Lin measure (Lin.), respectively. All these methods are applied in either lexical or semantic analysis. The proposed method of identifying sentence similarity using a hybrid model is stated in Table 4.

Table 4: New similarity score
          grow  need  far   outpace  resource
grow      1.00  0.21  0.00  0.13     0.00
need      0.21  1.00  0.18  0.15     0.34
far       0.00  0.18  1.00  0.00     0.15
outpace   0.13  0.15  0.00  1.00     0.00
resource  0.00  0.34  0.15  0.00     1.00

The method was also applied to the Bengali language using the Bengali WordNet. The experimental results are shown in Table 5.

Table 5: New similarity score (applied to a Bengali sentence)
          ট্রেন   সিট    ভাড়া   গন্তব্য
ট্রেন     1.00   0.78   0.88   0.16
সিট      0.78   1.00   0.31   0.24
ভাড়া     0.88   0.31   1.00   0.23
গন্তব্য   0.16   0.24   0.23   1.00

(The words are ট্রেন 'train', সিট 'seat', ভাড়া 'fare' and গন্তব্য 'destination'.)

5 Conclusion and Lines for Further Work

This paper has presented a sentence similarity measure using lexical and semantic similarity. Degrees of similarity were specified and implemented in the proposed method. Few resources are available for the Bengali language, and further development of Bengali resources is more than essential; the Bengali WordNet is not as stable as the WordNets available for English. This research found suitable output with the unsupervised approach, though a large dataset would be required to implement supervised learning methods. Other sentence similarity measures could be obtained through deeper semantic and syntactic analysis; if both of these analyses were performed together with lexical similarity, a better result could be found. More importantly, for a better text summarizer we need to identify the leading sentences, and centroid sentences could optimize the post-processing analysis of text summarization. Evaluating a system-developed summarizer before publishing it in final form is also important. Backtracking
methods could possibly be a good solution in this regard.

6 Acknowledgment

We would like to thank the Departments of Computer Science and Engineering of two universities, Daffodil International University and Comilla University, Bangladesh, for facilitating this joint research.

References
[1] Rafael Ferreira et al., "Assessing sentence scoring techniques for extractive text summarization", Expert Systems with Applications 40 (2013), 5755-5764. Elsevier.
[2] Sheikh Abujar, Mahmudul Hasan, "A comprehensive text analysis for Bengali TTS using Unicode", 5th IEEE International Conference on Informatics, Electronics and Vision (ICIEV), 13-14 May 2016, Dhaka, Bangladesh.
[3] Sheikh Abujar, Mahmudul Hasan, M. S. I. Shahin, Syed Akter Hossain, "A heuristic approach of text summarization for Bengali documentation", 8th IEEE ICCCNT 2017, July 3-5, 2017, IIT Delhi, Delhi, India.
[4] Lee, Ming Che, "A novel sentence similarity measure for semantic-based expert systems", Expert Systems with Applications 38.5 (2011), 6392-6399.
[5] Mani, Inderjeet, and Mark T. Maybury, eds., Advances in Automatic Text Summarization, Vol. 293, Cambridge, MA: MIT Press, 1999.
[6] Oliva, Jesús, et al., "SyMSS: A syntax-based measure for short-text semantic similarity", Data & Knowledge Engineering 70.4 (2011), 390-405.
[7] Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., Harshman, R., 1990, "Indexing by latent semantic analysis", J. Am. Soc. Inf. Sci. 41 (6), 391-407.
[8] Han, L., Kashyap, A. L., Finin, T., Mayfield, J., Weese, J., 2013, "UMBC EBIQUITY-CORE: Semantic textual similarity systems", Volume 1, Semantic Textual Similarity, Association for Computational Linguistics, Atlanta, Georgia, USA, June, pp. 44-52.
[9] Miller, G. A., 1995, "WordNet: a lexical database for English", Commun. ACM 38, 39-41.
[10] Chang, C.-C., Lin, C.-J., 2011, "LIBSVM: a library for support vector machines", ACM Trans. Intell. Syst. Technol. 2 (May (3)), 27.
[11] Mihalcea, R., Corley, C., Strapparava, C., 2006,
Corpus-based and knowledge-based measures of text semantic similarity, National Conference on Artificial Intelligence - Volume 1. AAAI Press, Boston, Massachusetts, pp. 775-780.
[12] Heilman, M., Smith, N. A., 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions, Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, pp. 1011-1019.
[13] Heilman, M., Smith, N. A., 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. Human Language Technologies, Stroudsburg, PA, USA, pp. 1011-1019.
[14] Qiu, L., Kan, M.-Y., Chua, T.-S., 2006. Paraphrase recognition via dissimilarity significance classification, EMNLP. Association for Computational Linguistics, Stroudsburg, PA, USA, pp. 18-26.
[15] Dzikovska, Myroslava O., et al. "Intelligent tutoring with natural language support in the Beetle II system." Sustaining TEL: From Innovation to Learning and Practice. Springer Berlin Heidelberg, 2010. 620-625.
[16] Jurgens, David, Mohammad Taher Pilehvar, and Roberto Navigli. "SemEval-2014 Task 3: Cross-level semantic similarity." SemEval 2014 (2014): 17.
[17] Mikolov, Tomas, et al. "Extensions of recurrent neural network language model." Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2011.
[The following pages are a scanned IEEE conference paper whose body text was damaged by font-encoding loss; only the fragments below are recoverable.]

Five authors, Department of Computer Science and Engineering, BRAC University, Dhaka, Bangladesh. [Author names and e-mail addresses unrecoverable.]

Abstract — [unrecoverable]

Keywords — Text Summarization, Bengali, FCM, TextRank, ROUGE, Extractive Text Summarization

I. INTRODUCTION
[Body largely unrecoverable. The section presents text summarization in two major categories, extractive and abstractive: the extractive approach selects sentences directly from the source document [1, 2], while abstractive summarization, as described in [26], generates new sentences in its own words and is computationally far more difficult.]
[Introduction continues, largely unrecoverable. Recoverable fragments: the test articles were collected from the online portal of The Daily Prothom Alo; fuzzy C-means clustering [4], TextRank [5], and sentence-scoring methods [2, 6, 7, 8] are implemented, and the ROUGE [9] scoring system with human-generated reference summaries is used to produce an understandable comparative result. The section closes with the paper roadmap: Section II describes the background study done for the project, Section III demonstrates the proposed model with a brief overview, Section IV describes the comparison and result analysis, and Section V concludes the paper.]

II. BACKGROUND STUDY
[Body largely unrecoverable. Recoverable fragments: automatic text summarization was first introduced by Luhn [10] in 1958, who proposed the use of statistical word frequency to determine sentence significance. For Bengali, the survey covers Islam et al. (2004) [11], Uddin and Khan (2007) [12], Efat et al. (2013) [13], who used a sentence-scoring model with frequency and weight features, Das and Bandyopadhyay (2010) [14], whose topic-based opinion summarizer works in two steps, and Sarkar (2012) [15], who used TF-IDF-based sentence scoring.]
[Background study continues, largely unrecoverable. Recoverable fragments: Akter et al. [16] proposed an extractive technique for Bengali documents based on K-means clustering; Patil and Dongre [17] proposed a fuzzy approach; the fuzzy C-means algorithm was developed by Dunn [4] in 1973 and later refined by Bezdek [18] in 1981; the TextRank algorithm [5] is a graph-based ranking model derived from the PageRank algorithm [19]; and [20] discusses how Principal Component Analysis (PCA) can be used to reduce the dimensionality of data.]

III. PROPOSED MODEL
[Overview largely unrecoverable. The model proposed in this paper uses three popular text summarization methodologies and provides a comparative study of their output summaries. The pipeline comprises a preprocessing stage, a feature-extraction stage that scores every sentence, and three summarization algorithms whose generated summaries are compared. The proposed model was tested on two Bengali news articles collected from online news portals, Article 1 [24] and Article 2 [25].]
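The score-then-aggregate flow of the proposed model can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the feature values in `F`, the min-max normalisation, and the equal feature weights are all assumptions.

```python
import numpy as np

# Hypothetical per-sentence scores; rows = sentences, columns = the six
# features (TF-IDF, numeric value, length, cue word, topic word, position).
F = np.array([[0.8, 0.0, 0.6, 0.2, 0.9, 1.0],
              [0.4, 1.0, 0.8, 0.0, 0.3, 0.2],
              [0.2, 0.0, 0.4, 0.1, 0.1, 0.5]])

rng = F.max(axis=0) - F.min(axis=0)
norm = (F - F.min(axis=0)) / np.where(rng == 0, 1.0, rng)  # min-max per feature
agg = norm.sum(axis=1)                # aggregate score per sentence
ranking = np.argsort(agg)[::-1]       # highest-scoring sentences first
```

Normalising each feature column first keeps any one feature (e.g. raw TF-IDF) from dominating the sum.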
[Largely unrecoverable. Recoverable outline: after preprocessing, the scoring stage is equipped with six different scoring techniques (detailed in Section III-B). Fig. 1. Workflow of the system. The six per-sentence scores form six-dimensional data, on which Principal Component Analysis (PCA) is performed to reduce it to two dimensions; the two-dimensional data is then clustered, and sentences from the dominant cluster are selected. Apart from clustering, two further summaries are generated per article with the other scoring techniques, and the k most important sentences [19] are chosen for each summary.]

A. Preprocessing
1) Stemming: Inflected word forms are mapped back to their root form, so that multiple words originating from the same root are counted as the same word; a rule-based Bengali stemmer similar to [21] was used. TABLE I. [stemming examples; Bengali word pairs partially unrecoverable]
2) Stop-word Removal: Stop words are words that occur very frequently in the language but carry little significance for scoring; a fixed list of such words is removed from each sentence before scoring. TABLE II. [a sentence before and after stop-word removal; partially unrecoverable]
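The two preprocessing steps above can be sketched as follows. The suffix list and stop-word list here are tiny illustrative samples, not the rule set or list the authors actually used.

```python
# Illustrative Bengali suffix and stop-word lists -- NOT the authors' actual rules.
BN_SUFFIXES = ["গুলো", "দের", "ের", "রা", "টি", "টা"]
STOP_WORDS = {"এবং", "ও", "থেকে", "করে"}

def stem(word):
    """Strip the longest matching inflectional suffix, if any."""
    for suf in sorted(BN_SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) > len(suf) + 1:
            return word[: -len(suf)]
    return word

def preprocess(sentence):
    """Tokenize on whitespace, drop stop words, then stem each token."""
    return [stem(t) for t in sentence.split() if t not in STOP_WORDS]
```

A production stemmer would need many more rules and exception handling; this only shows where stemming and stop-word removal sit in the pipeline.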
[Table II continues; content partially unrecoverable.]
3) Paragraph Splitting: Paragraphs are split apart, using newline characters as separators.
4) Sentence Splitting: Sentences are split using the sentence-ending punctuation of the language, so each sentence of the document can be scored individually.
5) Word Splitting (Tokenization): Word splitting is a needed step, as scoring requires the analysis of word frequencies.

B. Feature Extraction
1) TF-IDF: TF-IDF stands for Term Frequency-Inverse Document Frequency; this score represents how important or significant a specific word is in the entire document [10].
2) Numeric Value Based Sentence Scoring: Sentences containing numeric values are given a higher priority compared to others.
3) Sentence Length Based: Sentences are compared by length, and very short sentences receive a lower score.
4) Cue/ Skeleton Word Scoring: Sentences containing cue/skeleton words are given a higher score.
5) Topic Sentence Scoring: Sentences containing keywords that appear in the topic sentence of the article, or in the topic sentence of their paragraph, are given a higher score.
6) Sentence Position Based Scoring: Sentences in roughly the first and last 10% of a paragraph are given a higher position score.

C. Algorithms
1) Fuzzy C-means (FCM) algorithm: A soft clustering algorithm based on fuzzy logic. In soft clustering, a data point can belong to multiple clusters simultaneously, whereas in hard clustering a data point belongs to only one cluster. Each data point is associated with a membership function that represents the degree of its membership in a specific cluster. In this proposed method, two clusters are built, sentences with high importance and sentences with low importance; sentences belonging to the high-importance cluster are selected to form the summary. The implementation used a fuzziness parameter of 2, as in [22].
2) TextRank Algorithm: Based on the popular PageRank algorithm; TextRank builds a similarity graph over the sentences, ranks them, and returns the top 40% of sentences as the extractive summary. The implementation used PHP-Science-TextRank [23].
3) Aggregate Sentence Scoring: The six feature scores of every sentence are summed, the sentences are ranked by their aggregate score in descending order, and the top 40% of the sentences are presented as the extractive summary. Table III shows a sample (top five sentences) of the sentence ranking for Article 1.

TABLE III. SENTENCE-WISE AGGREGATE SCORES FOR ARTICLE 1 [Bengali sentences with scores 4.627, 3.858, 3.515, 3.44(?); partially unrecoverable]
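As a rough illustration of the graph-ranking idea behind TextRank, the following sketch runs PageRank-style power iteration over a toy sentence-similarity matrix. The damping factor `d = 0.85`, the iteration count, and the similarity values are assumptions, not the paper's settings.

```python
import numpy as np

def textrank(sim, d=0.85, iters=50):
    """PageRank-style power iteration over a sentence-similarity graph."""
    n = sim.shape[0]
    col = sim.sum(axis=0)
    M = sim / np.where(col == 0, 1.0, col)   # column-normalise the graph
    M[:, col == 0] = 1.0 / n                 # dangling columns -> uniform
    r = np.full(n, 1.0 / n)                  # start from a uniform rank vector
    for _ in range(iters):
        r = (1 - d) / n + d * M @ r          # damped rank update
    return r

# Toy 4-sentence graph: sentences 0 and 1 are strongly similar.
S = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.2, 0.1],
              [0.1, 0.2, 0.0, 0.3],
              [0.0, 0.1, 0.3, 0.0]])
scores = textrank(S)
top = np.argsort(scores)[::-1][:2]   # e.g. keep the top ~40% of sentences
```

Because `M` is column-stochastic, the rank vector stays a probability distribution, and the strongly connected sentences end up on top.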
IV. COMPARISON AND RESULT ANALYSIS
[Largely unrecoverable. Recoverable fragments:] ROUGE [9] is a common method to evaluate machine-generated summaries against a human-written reference summary, e.g., a gold summary; it scores an extractive summary by measuring the overlap of units between the generated summary and the reference. For the evaluation of the model's generated summaries from the three different methods, the ROUGE measure was used, comparing each generated summary with the reference summary (a human-written gold summary) for both test articles. It has two measures per summary, recall and precision; from these the F1 measure, a summary of a test's accuracy that uses both recall and precision, is calculated with Eqs. (1)-(3), where a score of 0 is the worst and 1 the best:

Precision = |S ∩ R| / |S|                                   (1)
Recall = |S ∩ R| / |R|                                      (2)
F1 = (2 × Precision × Recall) / (Precision + Recall)        (3)

where S denotes the set of units in the system-generated summary and R the set of units in the reference summary.

Fig. 2. Gold Summary for the Article 1 [Bengali text partially unrecoverable]
TABLE IV. NUMBER OF COMMON SENTENCES — Article 1: 10, 10, 12; Article 2: 7, 9, 10 [method labels unrecoverable]
TABLE V. COMPARISON BETWEEN the techniques' PRECISION AND RECALL FOR ARTICLE 2 [values unrecoverable]
TABLE VI. COMPARISON BETWEEN the techniques' PRECISION AND RECALL FOR ARTICLE 1 [values unrecoverable]
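Eqs. (1)-(3) can be computed in a few lines. This sketch scores plain unigram set overlap only; the official ROUGE package [9] additionally clips repeated-term counts and supports longer n-grams.

```python
def rouge1(candidate, reference):
    """Unigram precision, recall and F1 in the spirit of Eqs. (1)-(3)."""
    cand, ref = set(candidate.split()), set(reference.split())
    overlap = len(cand & ref)                 # |S ∩ R|
    p = overlap / len(cand)                   # Eq. (1)
    r = overlap / len(ref)                    # Eq. (2)
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0   # Eq. (3)
    return p, r, f1
```

For sentence-level scoring as in the head of this paper, the same formulas apply with S and R taken as sets of sentences instead of unigrams.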
[Result discussion largely unrecoverable. Recoverable fragments: as Table IV and Fig. 3 show, in both test cases (Articles 1 and 2) one technique yields a higher number of common sentences with the reference summary, and a higher number of common sentences gives a higher probability of retaining the important information of the input article. When judging the closeness of the generated summaries, visualized in Fig. 4 and Fig. 5, two features are used, the F1 measure and common sentences. For the first article, a higher F1 measure is observed for the summaries from both aggregate scoring and TextRank; for Article 2, the sentences selected by TextRank and aggregate scoring likewise result in a higher F1 measure. The summaries of Article 1 generated by fuzzy C-means, TextRank, and the aggregate scoring method are shown in Fig. 6, Fig. 7, and Fig. 8 respectively.]

Fig. 3. Number of common sentences in the generated summaries [chart unrecoverable]
Fig. 4. ROUGE score for the Article 1 [chart unrecoverable]
Fig. 5. ROUGE score for the Article 2 [chart unrecoverable]
Fig. 6. Summary generated by the fuzzy C-means algorithm [Bengali summary text partially unrecoverable]
Fig. 7. Summary generated by the TextRank algorithm [Bengali summary text partially unrecoverable]
Fig. 8. Summary generated by the aggregate scoring method [Bengali summary text partially unrecoverable]

V. CONCLUSION
[Opening largely unrecoverable; it notes that the world is producing more textual information than readers can consume, which makes automatic text summarization increasingly important.]
[Conclusion continues, largely unrecoverable. Recoverable fragments: text summarization falls into two classes of methods, extractive summarization and abstractive summarization; while the output from the abstractive method is more natural, the challenge of deep language processing makes its complexity too high, whereas the extractive method provides a workable trade-off with far lower computational requirements. In this paper, a completely new approach to Bengali text summarization has been proposed that scores sentences with six different features and compares three summarizers: fuzzy C-means clustering over the feature scores, the TextRank algorithm, a popular tool in the field of NLP that uses a similarity matrix to find the most useful sentences, and aggregate sentence scoring, which sums the six feature scores of every sentence and ranks the sentences in descending order. On both test articles, judged by the F1 measure and by the number of common sentences (sentences that are also found in the gold summary), the ranking-based methods performed best.]

REFERENCES
[1] Andhale, N., & Bewoor, L. A. (2016, August). An overview of text summarization techniques. In Computing Communication Control and automation (ICCUBEA), 2016 International Conference on (pp. 1-7). IEEE.
[2] Moratanch, N., & Chitrakala, S. (2017, January). A survey on abstractive text summarization. In Computer, Communication and Signal Processing (ICCCSP), 2017 International Conference on (pp. 1-6). IEEE.
[3] Moratanch, N., & Chitrakala, S. (2016, March). A survey on extractive text summarization. In Circuit, Power and Computing Technologies (ICCPCT), 2016 International Conference on (pp. 1-7). IEEE.
[4] Dunn, J. C. (1973). A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters.
[5] Mihalcea, R., & Tarau, P. (2004). TextRank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing.
[6] Abujar, S., Hasan, M., Shahin, M. S. I., & Hossain, S. A. (2017, July). A heuristic approach of text summarization for Bengali documentation. In Computing, Communication and Networking Technologies (ICCCNT), 2017 8th International Conference on (pp. 1-8). IEEE.
[7] Krishnaveni, P., & Balasundaram, S. R. (2017, July). Automatic text summarization by local scoring and ranking for improving coherence. In Computing Methodologies and Communication (ICCMC), 2017 International Conference on (pp. 59-64). IEEE.
[8] Vijay, S., Rai, V., Gupta, S., Vijayvargia, A., & Sharma, D. M. (2017, December). Extractive text summarisation in Hindi. In Asian Language Processing (IALP), 2017 International Conference on (pp. 318-321). IEEE.
[9] Lin, C. Y. (2004). ROUGE: A package for automatic
evaluation of summaries. Text Summarization Branches Out.
[10] Luhn, H. P. (1958). The automatic creation of literature abstracts. IBM Journal of research and development, 2(2), 159-165.
[11] Islam, M. T., & Al Masum, S. M. (2004, December). Bhasa: A corpus-based information retrieval and summariser for Bengali text. In Proceedings of the 7th International Conference on Computer and Information Technology.
[12] Uddin, M. N., & Khan, S. A. (2007, December). A study on text summarization techniques and implement few of them for Bangla language. In Computer and information technology, 2007. ICCIT 2007. 10th international conference on (pp. 1-4). IEEE.
[13] Efat, M. I. A., Ibrahim, M., & Kayesh, H. (2013, May). Automated Bangla text summarization by sentence scoring and ranking. In Informatics, Electronics & Vision (ICIEV), 2013 International Conference on (pp. 1-5). IEEE.
[14] Das, A., & Bandyopadhyay, S. (2010, August). Topic-based Bengali opinion summarization. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters (pp. 232-240). Association for Computational Linguistics.
[15] Sarkar, K. (2012). Bengali text summarization by sentence extraction. arXiv preprint arXiv:1201.2240.
[16] Akter, S., Asa, A. S., Uddin, M. P., Hossain, M. D., Roy, S. K., & Afjal, M. I. (2017, February). An extractive text summarization technique for Bengali document(s) using K-means clustering. In Imaging, Vision & Pattern Recognition (icIVPR), 2017 IEEE International Conference on (pp. 1-6). IEEE.
[17] Patil, D. B., & Dongre, Y. V. (2015). A fuzzy approach for text mining. IJ Mathematical Sciences and Computing, 4, 34-43.
[18] Bezdek, J. C. (1981). Objective function clustering. In Pattern recognition with fuzzy objective function algorithms (pp. 43-93). Springer.
[19] Langville, A. N., & Meyer, C. D. (2011). Google's PageRank and beyond: The science of search engine rankings. Princeton University Press.
[20] [author and title unrecoverable] (2017). Revista de la Facultad de Ingeniería, 31(9).
[21] [author unrecoverable] (2014). [Bengali stemmer repository; name unrecoverable]. [Online] GitHub.
[22] Warner, J., et al. (2017, October 6). JDWarner/scikit-fuzzy: Scikit-Fuzzy 0.3.1 (Version 0.3.1). Zenodo. doi:10.5281/zenodo.1002946
[23] DavidBelicza. (2018, October 08). DavidBelicza/PHP-Science-TextRank. Retrieved from https://github.com/DavidBelicza/PHP-Science-TextRank
[24] [Bengali article title partially unrecoverable]. (2018, November 26). The Daily Prothom Alo. Retrieved from https://www.prothomalo.com/ [path partially unrecoverable]
[25] [Bengali article title partially unrecoverable]. (2018, November 27). The Daily Prothom Alo. Retrieved from https://www.prothomalo.com/ [path partially unrecoverable]
[26] Dalal, V., & Malik, L. (2013, December). A survey of extractive and abstractive text summarization techniques. In 2013 6th International Conference on Emerging Trends in Engineering and Technology (pp. 109-110). IEEE.
|
An Approach for Bengali Text Summarization using Word2Vector
Conference Paper · July 2019 · DOI: 10.1109/ICCCNT45670.2019.8944536
Available at: https://www.researchgate.net/publication/338358097
Uploaded by Abu Kaisar Mohammad Masum on 08 January 2020.

Sheikh Abujar, Dept. of CSE, Daffodil International University, Dhaka, Bangladesh (sheikh.cse@diu.edu.bd)
Abu Kaisar Mohammad Masum, Dept. of CSE, Daffodil International University, Dhaka, Bangladesh (mohammad15-6759@diu.edu.bd)
Md Mohibullah, Dept. of CSE, Comilla University, Cumilla, Bangladesh (mohib.cse.bd@gmail.com)
Ohidujjaman, Dept. of CSE, Daffodil International University, Dhaka, Bangladesh (jaman.cse@diu.edu.bd)
Syed Akhter Hossain, Dept. of CSE, Daffodil International University, Dhaka, Bangladesh (aktarhossain@daffodilvarsity.edu.bd)

Abstract— Text summarization is one of the notable research areas of natural language processing. Several approaches have already been developed in this concern.
Such as – Abstractive approach and extractive approach. Most recent recurrent neural network methods are producing much better results. Several mentionable research has already been discussed for English language summarizer, but a few have already done for the Bengali language. There are so many prerequisites for data analysis purpose -word2vector is one of them. Understanding the vector representation of any text leads the way to identify the key main points of that specific text and helps to measure the relationship of that text with other texts in similarity/dissimilarity [11]. Generated matrix using word2vector can easily applicable for identifying top-ranked sentence/words, either domain specific or in general form. In this paper, a word2vector approach has been discussed in the context of text summarization for the Bengali language. Keywords— Word2vector, Natural Language Processing, Text Summarization, Bengali text analysis. I. INTRODUCTION Word2vec is one kind of neural network that uses two layers to process data. Because of that though it is a neural network, but not a deep neural network, as deep neural network uses more layer than word2vec. In a word2vec it uses text corpus as input data and as outcome it returns set of vector. It can feature vectors for different words in the corpus given to the system. Word2vec turns general input corpus data into numerical form, so that the deep net can understand it and data can be analyzed easily. General parsing is not the only application of word2vec, moreover it extends beyond it. Word2vec can be applied in different things, some of them are social media graphs code, playlists, and other verbal or symbolic series, because in those kind of data patterns can be discerned. Because like other different text data words are also the simple discrete state. For example, the probability of being co-occur. If usefulness and purpose of word2vec is considered, it can be said that it group the</s>
vectors of similar words and, according to their similarity, places them together in vector space. It uses mathematical measures to find similarities. Word2vec can create word vectors without human intervention. Those vectors are distributed numerical representations of word features, such as the context of individual words. Given enough data, word2vec can produce highly accurate results. The output of a word2vec neural network is a vocabulary in which each item is attached to a vector. A deep neural network can then use these vectors to find relationships between words [12].

II. LITERATURE REVIEW

Word2vec is a very large topic for exploration and development. Many people around the world are working on this topic to improve both the results and the analysis process. The research on this topic can be divided into two parts: development and application. In this section, a few studies are discussed briefly. Wang Ling et al. (2015) worked on adapting word2vec for syntax problems [1]. They presented a model that contains two simple modifications to the widely popular word2vec tool, in order to generate embeddings better suited to tasks involving syntax, and proposed a model to improve part-of-speech tagging and dependency parsing. Researcher Dongwen Zhang and his team worked on sentiment analysis of Chinese-language comments using word2vec [2]. They also used SVM in their research, combining SVM and word2vec to extract semantic relationships between words. Joseph Lilleberg et al. (2015) worked on SVM and word2vec with a focus on text classification [3].

IEEE - 45670, 10th ICCCNT 2019, July 6-8, 2019, IIT Kanpur, Kanpur, India

That work explains several useful properties of word2vec. They stated that tf-idf and word2vec can be used together and the result is much improved.
Because word2vec alone provides complementary features, it is less effective than their combined model. Researcher Bai Xue et al. (2014) worked on sentiment computing and classification of data collected from Sina Weibo using word2vec [4]. Since Weibo is used by a large number of people nowadays, they proposed a model to analyze the sentimental state of its text data. Chinese researcher Yao Yao and his team worked on sensing the spatial distribution of urban land use by integrating points-of-interest with the Google Word2Vec model [5]. Satwik Kottur et al. (2016) used word2vec for visually grounded word embeddings [6] through a learning model: they proposed a new model that learns visually grounded word embeddings which can capture the semantic relatedness of visual notions. In 2015, Long Ma and Yanqing Zhang researched processing big data with word2vec [7]. In this research they first trained the data with the Word2Vec model and
then evaluated the similarity of the words. Andi Rexha's team's research topic in 2016 was sentiment classification of tweet data using word2vec [8]. They presented a Word2Vec approach to autonomously predict the polarity class of a target phrase in a tweet. Shihao Ji et al. worked on parallelizing word2vec in shared and distributed memory [9], improving the word2vec algorithm for parallel execution on such memory systems. Bangladeshi researcher Md. Al-Amin et al. worked on sentiment analysis of Bengali comments analyzed with the Word2Vec technique [10]. They used this technique to extract sentiment-related information from text data: word2vec was applied for sentiment classification of Bengali comments with a new model, and sentiment was extracted using word2vec and word co-occurrence scores together with sentiment polarity scores. Word embedding is important for text analysis. It carries the numerical value of each word in a document file, and such an embedded word file helps in analyzing text documents, for example for building a text summarizer or bi-directional text generation. Several pre-trained word-embedding files exist for different languages, but only a few exist for our Bengali language. So our main intention in this paper is to enrich NLP research for the Bengali language and to introduce a method for building a word embedding for Bengali.

III. METHODOLOGY

Word2vec is used to produce word embeddings. We use word embeddings because the concept of a word is not understood by a machine; a machine can understand only binary or numerical values. So, to process a language and work with natural language processing, word2vector is needed. When word embedding is applied, a machine can convert each tokenized word to a vector, where each vector represents the vocabulary of the text documents.
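As a minimal illustration of the point above (a machine works only with numerical values, so tokens must first be mapped to indices or vectors), the following pure-Python sketch is our own assumption-level example, not the authors' code:

```python
def build_vocab(sentences):
    """Assign every unique token an integer index, in order of first appearance."""
    vocab = {}
    for sent in sentences:
        for tok in sent:
            vocab.setdefault(tok, len(vocab))
    return vocab

def one_hot(token, vocab):
    """Sparse numerical stand-in for a token; word2vec replaces this
    sparse vector with a dense, learned one."""
    vec = [0] * len(vocab)
    vec[vocab[token]] = 1
    return vec

# Tokenized Bengali sentence -> integer indices -> one-hot vectors.
vocab = build_vocab([["আমি", "বাংলা", "ভালোবাসি"]])
print(one_hot("বাংলা", vocab))  # [0, 1, 0]
```

Dense word2vec embeddings can be seen as a learned, low-dimensional replacement for these one-hot vectors.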
Word2vec contains a two-layer neural network, which is not deep. It uses several dimensions for each unique word of the text document. Our workflow for building the Bengali word2vec representation is given below in figure 1.

Figure 1: Working flow for Bengali Word2vec Representation

A. Data Collection and Preprocessing

Our dataset contains 1k Bengali news articles and a summary of each article. We collected the data from online news portals and social media pages. For word embedding, we use clean Bengali text, tokenize it, and prepare it as input for the neural network, which provides a numerical value for each word; every vector represents our dataset vocabulary. Before embedding words, we need to preprocess the data. Processing Bengali text data is quite difficult: removing spaces from words or sentences, removing unwanted characters, etc. At first, we need to expand Bengali contractions in the dataset, because a contraction uses the short form of a word while we need the full form for embedding (Table 1 gives examples). Then we split the text and remove unwanted things such as
whitespace, Bengali digits, English characters, and punctuation, and we remove stop words from the text, which are unnecessary. Finally, we create a clean text with a summary for use as input to the model.

Table 1: Example of Bengali contractions
SHORT FORM | FULL FORM
মি. | মিস্টার
ররজি: | ররজিস্ট্রেশন
ডা. | ডাক্তার

B. Word2vec Model

Word2vec contains a shallow neural network used to learn the status of each word in a text document. Each vector represents a word with a numerical value and provides a semantic description of the document's words. We used two methods for word embedding: one is Skip-Gram and the other is the Continuous Bag of Words model (CBOW). The Continuous Bag of Words model uses the word context to predict a target word corresponding to that context. Skip-Gram uses a word to predict the value of the target word's context.

a. Problem Assertion: Word2vec finds the similarity and dissimilarity of the words contained in the dataset. In the word vector space it uses the word-offset technique, which relies on simple algebraic operations. Consider vectors of Bengali words: the vector 'রািা' minus the vector 'পুরুষ' plus the vector 'নারী' yields a result whose nearest vector is that of 'রানী'. In this paper we discuss how easily a Bangla word-to-vector representation can be produced for working with text documents, and how Bengali word vectors represent the similarity, dissimilarity, and mutual relationships of words in vector context.

b. Skip-Gram Model

This model tries to identify words based on other words in the same sentence. As input we use the current word, with a hidden projection layer that predicts the words within a range. Distant words are less related, while nearby words are closely related. The formula for the model's training complexity is

Q = C × (D + D × log2(V)) (1)

Here, C is the maximum distance of a word (the window size); in the standard formulation of this complexity measure, D is the embedding dimensionality and V the vocabulary size.

Figure 2: View of Skip-Gram Model

c.
Continuous Bag of Words Model (CBOW)

Here the hidden layer is removed and the projection is shared among all words, so all words get the same position. This is the bag-of-words idea; because it uses continuously distributed representations of the context, it is called the Continuous Bag of Words model. The formula for the model is

Q = N × D + D × log2(V) (2)

Figure 3: View of Continuous Bag of Words Model (CBOW)

IV. EXPERIMENT AND OUTPUT

Working with Bengali text is always challenging: reading a Bengali dataset, Bengali text processing, stop-word removal, and regular expressions for cleaning Bengali text. After all of those steps complete successfully, we apply the clean text for Bengali word embedding. We train the words using the Continuous Bag of Words (CBOW) and Skip-Gram models, and for visualizing the words we use T-distributed Stochastic Neighbor Embedding (TSNE). The tables below contain our experimental results: the similarity between word pairs, the most similar words, the words that do not match, and
the related terms of a word.

Table 1: Similarity of two words
Similar words | Model | Value
"অনুভূতিতি", "অনুষ্ঠাতন" | CBOW | 0.73497474
"অনুভূতিতি", "অনুষ্ঠাতন" | Skip-Gram | 0.23969087

Algorithm 1: gensim Skip-Gram model
1: import model
2: Define Word2Vec(size, window, minimum count, sg, workers, hs, negative)
3: Build vocabulary(input text)
4: Define train(sentences=input text, total examples=length of input text, epochs number)
5: End

Algorithm 2: gensim CBOW model
1: import model
2: Define Word2Vec(input text, window, minimum count, workers)
3: Define train(input text, total examples, epochs number)
4: End

Table 2: Most similar word measure
Table 3: Does-not-match word measure
Table 4: Related term measure

Finally, we visualize a graph of the Bengali words, which represents the numerical values of the Bengali text document. When defining the TSNE model we set the labels and tokens of the words, with perplexity=40, 2500 iterations, init='pca', two components, and a figure size of (16,16). To display Bengali words we use Bengali font properties.

Figure 4: TSNE visualization for Bengali Word2vec

V. CONCLUSION AND FUTURE WORK

Word embedding is very important when working with a text document. Each vector carries part of the vocabulary of a text document; similar words are grouped together in vector space, and the similarity of words with each other can be measured. In this paper, we embedded Bengali text collected from online sources. Our collected dataset contains Bengali text and its summaries. We were able to create a useful Bengali word-embedding file from the applied dataset. The main limitation of this paper is the limited vocabulary: the available datasets are not sufficient for the Bengali language, so for our research purposes we collected data from different websites and social media and embedded it.
Another limitation is the sentence structure of the Bengali language: it is difficult to accurately segment words from Bengali sentences. Word2vec is important when working with text, for example in text summarization. It maps all similar words to numeric values, which helps when an LSTM cell works with important and unimportant values. Here we worked with a medium-sized dataset to build a word embedding for Bengali text, and we tried to produce a good Bengali Word2vec file. Our model gives good output, but in the future we want to build a large word-embedding file for Bengali text from a much bigger dataset and to improve the resources for Bengali-language research in natural language processing.

VI. ACKNOWLEDGMENT

We want to thank our Computer Science and Engineering Department for providing good research facilities. Special thanks to our DIU-NLP & ML lab for supporting and helping us complete our research project.

Table 3 (Does-not-match word measure):
Words | Model | Result
"আতেশ", "রতেতে", "হাইত ার্ট" | CBOW | "আস্ট্রেশ"
"আতেশ", "রতেতে", "হাইত ার্ট" | Skip-Gram | "আস্ট্রেশ"

Table 4 (Related term measure), for the word " তিন্তা ":
গঠস্ট্রনর 0.976, েিন 0.974, রেস্ট্রেম্বর 0.973, েুেক 0.967, র ালা 0.959, িূলধারায় 0.954, পস্ট্রে 0.953, মশল্পিন্ত্রী 0.952, অম স্ট্র াস্ট্রগর 0.951, জিজ্ঞাোবাস্ট্রের 0.951

Table 2 (Most similar word measure), for the word " রতেতে ":
CBOW: 'োস্ট্রিেন্ট' 0.922, 'িালাইকা' 0.907, 'পাচার' 0.899, 'প্রস্ট্রবশ' 0.889, 'র ান' 0.881
Skip-Gram: 'মডস্ট্রেম্বর' 0.834, 'মডএেইর' 0.823, 'োত' 0.804, 'কযান্টনস্ট্রিন্ট' 0.804, 'রেস্ট্রক' 0.799, 'মডএেইএস্ট্রের' 0.796, 'মবজিএিইএর' 0.792, 'মেনই' 0.792, 'োমকেট' 0.788, 'িিা' 0.785

REFERENCES

[1] W. Ling, C. Dyer, A. Black, I. Trancoso, "Two/Too Simple Adaptations of Word2Vec for Syntax Problems," in Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL, pages 1299-1304, Denver, Colorado, June 2015.
[2] D. Zhang, H. Xu, Z. Su, Y. Xu, "Chinese comments sentiment classification based on word2vec and SVM," in Expert Systems with Applications, Volume 42, Issue 4, Pages 1857-1863, March 2015.
[3] J. Lilleberg, Y. Zhu, Y. Zhang, "Support vector machines and Word2vec for text classification with semantic features," in 2015 IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing, DOI: 10.1109/ICCI-CC.2015.7259377, Beijing, China, September 2015.
[4] B. Xue, C. Fu, Z. Shaobin, "A Study on Sentiment Computing and Classification of Sina Weibo with Word2vec," in 2014 IEEE International Congress on Big Data, DOI: 10.1109/BigData.Congress.2014.59, ISSN: 2379-7703, USA, September 2014.
[5] Y. Yao, X. Li, X. Liu, P. Liu, Z. Liang, J. Zhang and K. Mai, "Sensing spatial distribution of urban land use by integrating points-of-interest and Google Word2Vec model," in International Journal of Geographical Information Science, 31:4, 825-848, DOI: 10.1080/13658816.2016.1244608, October 2016.
[6] S. Kottur, R. Vedantam, J. M. F. Moura, D. Parikh, "Visual Word2Vec (vis-w2v): Learning Visually Grounded Word Embeddings Using Abstract Scenes," in IEEE Conference on Computer Vision and Pattern Recognition, pp. 4985-4994, 2016.
[7] L. Ma and Y. Zhang, "Using Word2Vec to process big text data," in 2015 IEEE International Conference on Big Data (Big Data), pp. 2895-2897, DOI: 10.1109/BigData.2015.7364114, Santa Clara, CA, 2015.
[8] A. Rexha, M. Kröll, M. Dragoni, R. Kern, "Polarity Classification for Target Phrases in Tweets: A Word2Vec Approach," in The Semantic Web. ESWC 2016. Lecture Notes in Computer Science, vol 9989. Springer, Cham, 2016.
[9] S. Ji, N. Satish, S. Li and P. Dubey, "Parallelizing Word2Vec in Shared and Distributed Memory," in IEEE Transactions on Parallel and Distributed Systems, DOI: 10.1109/TPDS.2019.2904058, 2019.
[10] M. Al-Amin, M. S. Islam and S. Das Uzzal, "Sentiment analysis of Bengali comments with Word2Vec and sentiment information of words," in International Conference on Electrical, Computer and Communication Engineering (ECCE), pp. 186-190, DOI: 10.1109/ECACE.2017.7912903, Cox's Bazar, 2017.
[11] S. Abujar et al., "A heuristic approach of text summarization for Bengali documentation," in 8th IEEE ICCCNT 2017, IIT Delhi, Delhi, India, 3-5 July 2017.
[12] S. Abujar and M. Hasan, "A comprehensive text analysis for Bengali TTS using Unicode," in 5th IEEE International Conference on Informatics, Electronics and Vision (ICIEV), Dhaka, Bangladesh, 13-14 May 2016.
[13] S. Abujar, M. Hasan, S. A. Hossain, "Sentence Similarity Estimation for Text Summarization Using Deep Learning," in A. Kulkarni, S. Satapathy, T. Kang, A. Kashan (eds), Proceedings of the 2nd International Conference on Data Engineering and Communication Technology, Advances in Intelligent Systems and Computing, vol 828. Springer,
Singapore.
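To make the contrast between the two word2vec architectures described above concrete (Skip-Gram predicts context words from the center word; CBOW predicts the center word from its bag of context words), here is a minimal, pure-Python sketch of how the two models frame their training pairs over the same token stream. This is an illustrative assumption on our part, not the authors' implementation; a real pipeline would use a library such as gensim:

```python
def skipgram_pairs(tokens, window=2):
    """Skip-Gram: one (center, context) pair per context word in the window."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

def cbow_pairs(tokens, window=2):
    """CBOW: the whole bag of context words jointly predicts the center word."""
    pairs = []
    for i, center in enumerate(tokens):
        context = tuple(tokens[j]
                        for j in range(max(0, i - window),
                                       min(len(tokens), i + window + 1))
                        if j != i)
        if context:
            pairs.append((context, center))
    return pairs
```

For the toy stream ["a", "b", "c"] with window 1, Skip-Gram yields the pairs (a,b), (b,a), (b,c), (c,b), while CBOW yields ((b,) -> a), ((a,c) -> b), ((b,) -> c); the neural network is then trained to score each target given its input.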
Automatic Bengali News Documents Summarization by Introducing Sentence Frequency and Clustering

Md. Majharul Haque, Suraiya Pervin, Department of Computer Science & Engineering, University of Dhaka, Dhaka-1000, Bangladesh. Email: mazharul_13@yahoo.com, suraiya@univdhaka.edu
Zerina Begum, Institute of Information Technology, University of Dhaka, Dhaka-1000, Bangladesh. Email: zerin@iit.du.ac.bd

Abstract—A method is proposed in this paper for Bengali news document summarization which extracts significant sentences using four major steps: (a) preprocessing, (b) sentence ranking, (c) sentence clustering, and (d) summary generation. A noticeable feature of this method is the incorporation of sentence frequency, of which redundancy elimination is a consequence. Another remarkable aspect is sentence clustering on the basis of the similarity ratio among sentences. Summary sentences are selected from all the clusters so that there is maximum coverage of information in the summary even if the information is scattered through the input document. Two sets of human-generated summaries have been utilized, one to train the system and the other for performance evaluation. The proposed method has been found better in comparison with the latest state-of-the-art method of Bengali news document summarization. The results of the performance evaluation show that the average Precision, Recall and F-measure values are 0.608, 0.664 and 0.632 respectively.

Keywords—Documents summarization; sentence clustering; sentence frequency; redundancy elimination; similarity ratio

I. INTRODUCTION

The amount of available information increases rapidly with the development of information technology and the wide use of the Internet [1], for which a new era of information explosion is impending. The estimated size of the web in 2013 was around 3.82 billion pages [2], and this number is growing every day at a fast pace [3].
So automatic summarization is needed to process Internet data efficiently, scavenging useful information from it [3]. The goal of automatic text summarization is to condense the source text into a shorter version while preserving its information content and overall meaning [4, 5]. To date, numerous research works have been accomplished by various researchers, giving us multiple ways of summary generation for the English language [6]. But few works exist for the Bengali language, and most of them work directly on term frequency and some other statistical measures [7]. However, a purely statistical method of producing extracts was suspected of being inadequate, and hence other methods were sought [8]. Moreover, a term-frequency-based method will only select the sentences that contain frequent terms, so the summary will contain similar sentences only; yet significant sentences may exist in the document that do not contain frequent terms, for which they would be discarded from selection. In this regard, a method is proposed here for Bengali news document summarization with the following major contributions, which distinguish this method from others:
• Along with term frequency, this method introduces sentence frequency during sentence ranking. If one sentence contains 60% of the terms of any other, the smaller sentence is removed and the frequency of the larger sentence is increased accordingly.
• Sentences are
clustered into different groups and selected from all clusters based on their volume. This feature maximizes information coverage because all the clusters participate in summary preparation. The concept of clustering has already been incorporated in English text summarization [9], but in this proposed method the way it is utilized, with the similarity-ratio adjustment, is unique. Moreover, this method applies sentence clustering to Bengali text summarization for the first time. The rest of the paper is organized as follows: Section II describes related works on Bengali text summarization. Section III illustrates the proposed methodology in detail. The experiment and evaluation, with a discussion of the results, are depicted in Section IV. Finally, the conclusion is drawn in Section V with future works.

II. RELATED WORK

Automatic, i.e. computerized, abstraction began around five decades ago with H. P. Luhn [10] in 1958, on the basis of term frequency, and it was first extended by P. B. Baxendale [11] by incorporating the position of sentences and cue phrases. Since then, the field of text summarization has witnessed the continuous involvement of many researchers in the attempt to explore different strategies [5, 12]. H. P. Edmundson [13] in 1969 accomplished notable progress by integrating the title method, cue-phrase method and location method. The noticeable point, however, is that most of these research works have been conducted for English. Few attempts have been reported for the Bengali language, though it is the 7th international language in the world and the mother language of Bangladesh [14]. It is also noticeable that online Bengali text is increasing rapidly, and there are a number of online newspapers such as "The Daily Prothom alo", "Anandabazar Patrika", "The Daily Jugantor", etc. In this regard, the need for automatic Bengali news document summarization is beyond question.

In 2004, Islam and Masum [15] presented 'Bhasa', a corpus-oriented search engine and summarizer. It performs document indexing and retrieves information based on keywords, using the vector-space retrieval method for Unicode-based Bengali text. Corpus files can be ranked and summarized by this method according to the frequent appearance of query terms. A tokenizer is used which is able to determine terms, abbreviations, tags, and the boundaries of sentences, headings and titles, using markup together with semantic and syntactic analysis. In 2007, Md. Nizam Uddin and Shakil Akter Khan [16] carried out a survey of English text summarization systems and implemented some existing features to summarize Bengali text. The features are as follows: i) the location method, ii) the cue method, iii) the title method, iv) term frequency, and v) numerical data. They took the 40% highest-ranked sentences from the input document as the summary. It was found that a 40% extract by this system got a score of 8.4 from human judges on a range of 0 to 10. In 2010, Amitava Das and Sivaji Bandyopadhyay [17] offered a method for opinion summarization in Bengali. They utilized a subjectivity classifier [18] to determine subjective or factual sentences or documents for opinion mining. The system identifies the sentiment information in each
<s>document, aggregates them and represents the summary information in text. The aggregation is performed by using k-means approach and candidate summary sentences are selected by applying theme relational graph model. Standard page rank algorithm has been used here. In evaluation, the Precision, Recall and F-Score of this approach are calculated as 72.15%, 67.32% and 69.65% respectively. This system is mostly works on theme detection. Somewhere, same procedure as like English text summarizer has been followed for Bengali language as proposed by Kamal Sarkar [7] in 2012. This is an easy-to-implement approach as like the method of Edmandson [13]. It has three major steps: (1) preprocessing, (2) sentence ranking, and (3) summary generation. This method is based on word-frequency, length of sentence and position of sentences for sentence scoring. The evaluation of this method has shown that average unigram based recall score is 0.4122. In 2012, Kamal Sarkar [19] proposed another one method in the aim to provide an idea about the theme of a document without revealing the in-depth detail. This approach has four major steps (1) preprocessing, (2) extraction of candidate summary sentences, (3) ranking the candidate summary sentences, and (4) summary generation. This is also based on word-frequency, sentence position and sentence length that is similar to [7]. It was claimed that the features have been used here in more effective way for news documents summarization than [7]. Evaluation results showed that this system performs better than the lead baseline, baseline that uses term-frequency with position features and the method described in [7]. To the best of our knowledge, the method described in [19] is comparatively latest and better than any other state-of-the art methods for news documents summarization for Bengali language. In the evaluation section, the proposed method in this paper has been compared with the method described in [19]. III. 
PROPOSED METHODOLOGY

The proposed text summarization approach is described in the following four steps.

A. Preprocessing
At the preprocessing step, stop words such as “e”, “e ”, “ ”, etc. are removed as per the list of Bengali stop words [20]. Word stemming is applied to map words with different endings to a single root, e.g., "g " and "g " both become "g ". For stemming, a lightweight stemmer for Bengali is used that strips suffixes against a predefined suffix list on a “longest match” basis [21]. For words that cannot be stemmed with suffix-stripping rules, such as “ ”, a look-up table is used as in [22] to obtain the root form. Bengali is a highly inflectional language, so stemming is required before calculating term frequency. After stemming, the term frequency of all terms is computed and the entire document is segmented into sentences. Analysis has shown that sentences of length less than or equal to 4 are very rarely included in a summary, so they are deleted [19].

B. Sentence Ranking
For sentence scoring, the values of some attributes are first calculated for all the sentences, and then all the attributes' values are summed to compute the score of each sentence. Three attributes are considered in this method: 1) term frequency calculation for each sentence, 2) sentence frequency, and 3) existence of numerical data.

1) Term frequency calculation for each sentence (STF): Term frequency (TF) is the number of appearances of a term, estimated as

TF(t) = number of occurrences of term t in the document (1)

The first attribute, the term frequency of one sentence (STF), is calculated by summing the TF of all the terms in the sentence.

2) Sentence frequency: Along with term frequency, this methodology introduces a second attribute, sentence frequency (SSF), based on the ratio of term overlap. All sentences are initially assigned frequency 1. If one sentence contains 60% or more of the terms of another, the smaller sentence is removed and the frequency of the larger sentence becomes the sum of the frequencies of both. The 60% containment ratio is based on the threshold value of the cosine similarity ratio [23].

3) Existence of numerical data: The third attribute counts the numerical data in each sentence (SNc). The value of SNc for each sentence starts at 0 (zero) and is incremented by 1 for each occurrence of numerical data. After measuring all the attributes, the score Sk of the Kth sentence is computed as

Sk = STF + SSF + SNc (2)

C. Sentence clustering
Not all significant sentences contain frequent terms or relate to a single central theme of the input document, so applying the sentence selection process directly to the whole document may fail to cover all the information. Considering this issue, it is proposed here that the sentences first be clustered according to their cosine similarity ratio.
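The scoring scheme above (eq. 2) can be sketched as follows. This is an illustrative sketch, not the paper's PHP implementation: sentences are assumed to be already preprocessed into lists of stemmed, stop-word-free terms, and numerical data is detected with a simple digit pattern.

```python
import re
from collections import Counter

def score_sentences(sentences):
    """Score sentences with Sk = STF + SSF + SNc.

    `sentences` is a list of token lists (stemmed, stop words removed).
    Returns {sentence index: score} for the sentences that survive the
    sentence-frequency merging step.
    """
    # TF: number of appearances of each term in the whole document.
    tf = Counter(term for sent in sentences for term in sent)

    # SSF: every sentence starts at frequency 1; if a sentence contains
    # 60% or more of another's terms, the smaller one is removed and its
    # frequency is added to the larger one.
    freq = [1] * len(sentences)
    alive = [True] * len(sentences)
    for i in range(len(sentences)):
        for j in range(len(sentences)):
            if i == j or not (alive[i] and alive[j]):
                continue
            small, large = (i, j) if len(sentences[i]) <= len(sentences[j]) else (j, i)
            terms_small = set(sentences[small])
            overlap = len(terms_small & set(sentences[large]))
            if terms_small and overlap / len(terms_small) >= 0.6:
                freq[large] += freq[small]
                alive[small] = False

    scores = {}
    for k, sent in enumerate(sentences):
        if not alive[k]:
            continue
        stf = sum(tf[t] for t in sent)                         # summed term frequency
        snc = sum(1 for t in sent if re.fullmatch(r"\d+", t))  # count of numerical data
        scores[k] = stf + freq[k] + snc                        # Sk = STF + SSF + SNc
    return scores
```

For example, a short sentence whose terms are fully contained in a longer one is dropped, and its frequency boosts the longer sentence's score.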
If the cosine similarity ratio between two or more sentences is equal to or above a minimum threshold point, they are placed in a single cluster. This threshold point is tuned on the training corpus through experiments. The average F-measure of summarization performance is computed for the summaries of 15 test documents while clustering with different threshold points. Fig. 1 shows the F-measure obtained by clustering with similarity ratios (SR) from 0.00 to 0.59, incrementing the SR by 0.01 each time. SR equal to 0.09 is found to be the best minimum threshold point. The range of SR is limited to 0.00–0.59 because 0.00 means no clustering and 0.59 is the upper limit: an SR above 0.59 is treated as repetition during the sentence frequency calculation. At different threshold points, different numbers of clusters are generated for different documents, which varies the summarization performance. The average F-measure is found to be highest with the number of clusters constructed for SR equal to 0.09. In this way all the sentences are clustered; sentences with no similarity to any other are kept in a separate cluster. From Fig. 1, the performance of clustering with SR equal to 0.00 and with SR from 0.50 to 0.59 is almost identical, because with SR from 0.50 to 0.59 most sentences remain unclustered, which tends toward the SR 0.00 (no clustering) case.

D. Summary generation
After clustering, sentences are selected from each cluster in proportion to the cluster's size. If there are N sentences in the whole document, one cluster has C sentences, and the summary is to contain S sentences, the number of top-scored sentences selected from that cluster (Ns) is

Ns = (C / N) × S (3)

The number of summary sentences is kept at approximately one third of the total sentences, following the source-to-summary ratio mentioned in [24]. Finally, all the selected sentences are ordered by their order of appearance in the original document to produce the final summary.

Fig. 1. Effect on summarization performance when the similarity ratio for clustering is varied.
Fig. 2. Process flow of the proposed methodology.

The process flow of the proposed method is given at a glance in Fig. 2.

IV. EVALUATION AND DISCUSSION ON RESULTS
For the evaluation of the proposed methodology, 40 news articles were collected as a test corpus from the Bengali daily newspapers “The Daily Prothom alo” and “The Daily Jugantor”. From the test corpus, 20 news articles were selected randomly. Three humans (graduates in Computer Science and Engineering) generated summaries for each article, and these summaries are considered reference/model summaries. From these 20 document-summary pairs, 15 randomly selected pairs are used as the training set for adjusting the value of the similarity ratio (SR) in the previous section.
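Before turning to the evaluation details, the summary generation step can be sketched as follows. Since the exact rounding in (3) is not reproduced here, the sketch assumes each cluster contributes a share of summary sentences proportional to its size, rounded up; the function name and rounding choice are illustrative assumptions, not the paper's implementation.

```python
import math

def select_summary(clusters, scores, n_total, n_summary):
    """Pick Ns = (C / N) * S top-scored sentences from each cluster and
    return them in their original document order.

    `clusters` maps cluster id -> list of sentence indices,
    `scores`   maps sentence index -> score Sk.
    """
    picked = []
    for members in clusters.values():
        ns = math.ceil(len(members) / n_total * n_summary)  # cluster's proportional share
        best = sorted(members, key=lambda k: scores[k], reverse=True)[:ns]
        picked.extend(best)
    return sorted(picked)  # order of appearance in the original document
```

Sorting the picked indices at the end restores the sentences' order of appearance, as the method requires for the final summary.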
The other 5 document-summary pairs are used for the evaluation of the proposed system.

A. Evaluation
Evaluating the quality of a summary is a difficult problem, principally because there is no obvious “ideal” summary; even for relatively straightforward news articles, human summarizers tend to agree on only approximately 60% of the content [24]. A summary generated by the same person may even vary at different times for the same article. Therefore, the summary of the proposed system has been compared with three model summaries for each of 5 news articles, and the reported results are the averages of these comparisons. Precision, Recall and F-measure are used here, as they have long served as important evaluation metrics in the information retrieval field [25]. If ‘A’ denotes the set of sentences retrieved by the summarizer and ‘B’ the set of sentences that are relevant as compared with the target set, Precision, Recall and F-measure are computed as:

Precision (P) = (A ∩ B) / A (4)
Recall (R) = (A ∩ B) / B (5)
F-measure = (2 × P × R) / (P + R) (6)

B. Experiments and results
The proposed method has been implemented, along with an existing Bengali text summarization method [19], in the server-side scripting language PHP (Hypertext Preprocessor). To judge the effectiveness of the proposed method, experiments have been conducted on several news articles. Each system-generated summary is compared with the three model summaries of its article, and the average values of Precision, Recall and F-measure are computed using (4), (5) and (6) respectively. The proposed system has also been compared with an existing Bengali news document summarization method [19], which was claimed to be better than two baseline systems and the system described in [7]. It should be mentioned that the same documents and corresponding model summaries have been used to calculate Precision, Recall and F-measure for both the proposed method and the existing method of [19]. The results of evaluation and comparison are shown in Tables I and II respectively. As per these results, the proposed method shows a promising outcome. The proposed method appears more effective owing to distinguishing features such as sentence frequency, sentence clustering and the consideration of numerical data in each sentence. A model summary and a system-generated summary are given as examples in Fig. 3 and Fig. 4 respectively; the input article “ n, ” was taken from the Bengali newspaper “The Daily Prothom-alo”.

TABLE I. EVALUATION OF THE PROPOSED SYSTEM
Articles    P      R      F-measure
Article 1   0.570  0.600  0.590
Article 2   0.780  0.780  0.780
Article 3   0.500  0.670  0.570
Article 4   0.520  0.600  0.550
Article 5   0.670  0.670  0.670
Average     0.608  0.664  0.632

TABLE II. COMPARISON OF THE PROPOSED SYSTEM
Methods                P      R      F-measure
Proposed system        0.608  0.664  0.632
Existing system [19]   0.538  0.556  0.546

Fig. 3. Model summary generated by human.
Fig. 4. System generated summary.

V.
CONCLUSION AND FUTURE WORKS
In this paper, a method for summarizing Bengali news documents has been proposed, introducing sentence frequency and clustering. A review study has also been presented to outline the foundation of research on automatic text summarization for the Bengali language. As per the evaluation results, the proposed system can help in obtaining precise information within a comparatively short time. In future work, we hope to introduce more features for sentence ranking. We also expect to extend the proposed scheme into a portable, language-independent text summarization procedure.

ACKNOWLEDGMENTS
This research work is funded by a Fellowship Scholarship from the Information and Communication Technology Division, Government of the People’s Republic of Bangladesh. There was also valuable support from the Central Bank of Bangladesh.

REFERENCES
[1] Dongmei Ai, Yuchao Zheng, and Dezheng Zhang, “Automatic text summarization based on latent semantic indexing,” Journal of Artificial Life and Robotics, Springer, vol. 15, issue 1, pp. 25-29, August 2010.
[2] Kunder, M., “The size of the world wide web,” online available at: www.worldwidewebsize.com/ (last accessed February 2014).
[3] Rafael Ferreira and Luciano de Souza, “A multi-document summarization system based on statistics and linguistic treatment,” Journal of Expert Systems with Applications, Elsevier, vol. 41, issue 13, pp. 5780-5787, 1st October 2014.
[4] Yogan Jaya Kumar and Naomie Salim, “Automatic Multi Document Summarization Approaches,” Journal of Computer Science, vol. 8, issue 1, pp. 133-140, 2012.
[5] V. Gupta and G. S. Lehal, “A Survey of Text Summarization Extractive Techniques,” Journal of Emerging Technologies in Web Intelligence, vol. 2, no. 3, pp. 258-268, August 2010.
[6] Md. Majharul Haque, Suraiya Pervin and Zerina Begum, “Literature Review of Automatic Single Document Text Summarization Using NLP,” International Journal of Innovation and Applied Studies, vol. 3, no. 3, pp. 857-865, July 2013.
[7] K. Sarkar, “An approach to summarizing Bengali news documents,” In Proceedings of the International Conference on Advances in Computing, Communications and Informatics, ACM, pp. 857-862, 2012.
[8] G. J. Rath, A. Resnick and T. R. Savage, “Comparisons of four types of lexical indicators of content,” Journal of the American Society for Information Science and Technology, vol. 12, no. 2, pp. 126-130, April 1961.
[9] Zhang Pei-ying and Li Cun-he, “Automatic text summarization based on sentences clustering and extraction,” 2nd IEEE International Conference on Computer Science and Information Technology, Beijing, pp. 167-170, August 2009.
[10] Hans P. Luhn, “The Automatic Creation of Literature Abstracts,” IBM Journal of Research and Development, vol. 2, no. 2, pp. 159-165, 1958.
[11] P. B. Baxendale, “Machine-made Index for Technical Literature - An Experiment,” IBM Journal of Research and Development, vol. 2, no. 4, pp. 354-361, October 1958.
[12] H. Saggion and T. Poibeau, “Automatic Text Summarization: Past, Present and Future,” Multi-source, Multilingual Information Extraction and Summarization, Springer-Verlag, Berlin, Heidelberg, pp. 3-21, 2013.
[13] H. P. Edmundson, “New Methods in Automatic Extracting,” Journal of the Association for Computing Machinery, vol. 16, no. 2, pp. 264-285, April 1969.
[14] “Banglapedia, the National Encyclopedia of Bangladesh,” Asiatic Society of Bangladesh, Dhaka, 2003.
[15] Md Tawhidul Islam and Shaikh Mostafa Al Masum, “Bhasa: A Corpus-Based Information Retrieval and Summariser for Bengali Text,” In Proceedings of the 7th International Conference on Computer and Information Technology, 2004.
[16] Md. Nizam Uddin and Shakil Akter Khan, “A Study on Text Summarization Techniques and Implement Few of Them for Bangla Language,” 10th International Conference on Computer and Information Technology, IEEE, pp. 1-4, 2007.
[17] Amitava Das and Sivaji Bandyopadhyay, “Topic-Based Bengali Opinion Summarization,” International Conference COLING ’10, Beijing, pp. 232-240, 2010.
[18] Amitava Das and Sivaji Bandyopadhyay, “Subjectivity Detection in English and Bengali: A CRF-based Approach,” In Proceedings of ICON, Hyderabad, 14th-17th December 2009.
[19] K. Sarkar, “An approach to summarizing Bengali news documents,” In Proceedings of the International Conference on Advances in Computing, Communications and Informatics, ACM, pp. 857-862, 2012.
[20] List of stop words for Bengali language. Online available at: http://www.isical.ac.in/~fire/stopwords_list_ben.txt (last accessed 12 July 2015).
[21] Md. Zahurul Islam, Md. Nizam Uddin, and Mumit Khan, “A light weight stemmer for Bengali and its Use in spelling Checker,” Center for Research on Bangla Language Processing (CRBLP),
2007.
[22] Md. Redowan Mahmud, Mahbuba Afrin, Md. Abdur Razzaque, Ellis Miller, and Joel Iwashige, “A Rule Based Bengali Stemmer,” International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2014.
[23] L. Gravano, P. G. Ipeirotis, H. V. Jagadish, N. Koudas, S. Muthukrishnan, L. Pietarinen, and D. Srivastava, “Using q-grams in a DBMS for approximate string processing,” IEEE Data Engineering Bulletin, vol. 24, issue 4, pp. 28-34, 2001.
[24] Dragomir R. Radev, Eduard Hovy and Kathleen McKeown, “Introduction to the special issue on summarization,” Journal of Computational Linguistics, MIT Press, vol. 28, no. 4, pp. 399-408, December 2002.
[25] Shanmugasundaram Hariharan, Thirunavukarasu Ramkumar and Rengaramanujam Srinivasan, “Enhanced Graph Based Approach for Multi Document Summarization,” The International Arab Journal of Information Technology, vol. 10, no. 4, July 2013.
An extractive text summarization technique for Bengali document(s) using K-means clustering algorithm. Conference Paper, January 2017. DOI: 10.1109/ICIVPR.2017.7890883. Authors include Sumya Akter, Aysa Siddika Asa, Md. Palash Uddin and Md. Delowar Hossain.
228c171a-XXX&enrichSource=Y292ZXJQYWdlOzMxNTg2NzUzODtBUzo2MjU4OTM5MTU1MDg3MzhAMTUyNjIzNjE0NjgyNA%3D%3D&el=1_x_4&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Delowar_Hossain3?enrichId=rgreq-7c98d58c8180681e5ca58020228c171a-XXX&enrichSource=Y292ZXJQYWdlOzMxNTg2NzUzODtBUzo2MjU4OTM5MTU1MDg3MzhAMTUyNjIzNjE0NjgyNA%3D%3D&el=1_x_5&_esc=publicationCoverPdfhttps://www.researchgate.net/institution/Kyung_Hee_University_Computer?enrichId=rgreq-7c98d58c8180681e5ca58020228c171a-XXX&enrichSource=Y292ZXJQYWdlOzMxNTg2NzUzODtBUzo2MjU4OTM5MTU1MDg3MzhAMTUyNjIzNjE0NjgyNA%3D%3D&el=1_x_6&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Delowar_Hossain3?enrichId=rgreq-7c98d58c8180681e5ca58020228c171a-XXX&enrichSource=Y292ZXJQYWdlOzMxNTg2NzUzODtBUzo2MjU4OTM5MTU1MDg3MzhAMTUyNjIzNjE0NjgyNA%3D%3D&el=1_x_7&_esc=publicationCoverPdfhttps://www.researchgate.net/profile/Md_Palash_Uddin?enrichId=rgreq-7c98d58c8180681e5ca58020228c171a-XXX&enrichSource=Y292ZXJQYWdlOzMxNTg2NzUzODtBUzo2MjU4OTM5MTU1MDg3MzhAMTUyNjIzNjE0NjgyNA%3D%3D&el=1_x_10&_esc=publicationCoverPdfAn Extractive Text Summarization Technique for Bengali Document(s) using K-means Clustering Algorithm Sumya Akter1, Aysa Siddika Asa2, Md. Palash Uddin3, Md. Delowar Hossain4, Shikhor Kumer Roy5, and Masud Ibn Afjal6 Faculty of Computer Science and Engineering, Hajee Mohammad Danesh Science and Technology University (HSTU), Dinajpur-5200, Bangladesh 1sumya.hstu@gmail.com, 2asha.cse12@gmail.com, 3palash_cse@hstu.ac.bd, 4delowar.cit@gmail.com, 5shikhorroy.cse12@gmail.com and 6masud@hstu.ac.bd Abstract— Text summarization, a field of data mining, is very important for developing various real-life applications. Many techniques have been developed for summarizing English text(s). But, a few attempts have been made for Bengali text because of its some multifaceted structure. This paper presents a method for text summarization which extracts important sentences from a single or multiple Bengali documents. 
The input document(s) are first pre-processed by tokenization, a stemming operation, etc. Then, each word score is calculated by Term-Frequency/Inverse Document Frequency (TF/IDF), and each sentence score is determined by summing up its constituent words' scores together with its position. Cue and skeleton words have also been considered in calculating the sentence score. For single or multiple documents, the K-means clustering algorithm has been applied to produce the final summary. The experimental result shows satisfactory outputs in comparison to the existing approaches while possessing linear run-time complexity.

Keywords— data mining; text summarization; extractive summarization; Bengali document(s) summarization; TF*IDF; K-means clustering algorithm

I. INTRODUCTION
Nowadays, the use of the Internet has caused a rapid growth of electronic data which needs to be processed, stored, and managed. Sometimes it is difficult to find the exact information in such a large amount of data, or big data. Big data [1] has the potential to be mined for information, and data mining is essential to find out the proper information we need. When data are being accessed from such a huge repository of e-documents, hundreds or thousands of documents are retrieved through data mining. It also finds the correlations or patterns among dozens of fields in large relational databases [2]-[3]. Data mining's roots are traced back along three family lines: classical statistics, artificial intelligence, and machine learning [4]. Data mining is thus the process used to discover knowledge in databases, which is very useful for extracting and identifying useful information and subsequent knowledge from databases. The extracted patterns from the database are then used to build data mining models, which can be used to predict performance and behavior with high
accuracy. It utilizes descriptive (e.g. summarization, clustering, sequence discovery, etc.) and predictive (e.g. classification, regression, time series analysis, etc.) data mining approaches in order to discover hidden information [5]. As a field of data mining, text summarization is one of the most popular research areas for extracting the main theme from a large volume of data. It is generally used to denote any system that analyzes large quantities of natural-language text and detects lexical or linguistic usage patterns in an attempt to extract probably useful information. Essentially, text summarization techniques are classified as extractive and abstractive. Extractive techniques perform text summarization by selecting sentences of documents according to some criteria. Abstractive techniques attempt to improve the coherence among sentences by eliminating redundancies and clarifying the context of sentences. Sentence scoring is the most used technique for extractive text summarization. So, extractive summarization involves assigning a saliency measure to some units (e.g. sentences, paragraphs) of the documents and extracting those with the highest scores to include in the summary [6]. Moreover, people want to know any information in a precise way; they do not like to read large documents with redundant data to gather information. Thus, the technique of summarizing any text document helps to find the informative sentences and saves precious time.

A. General Procedure of Text Summarization
A general procedure for extractive methods, usually performed in three steps, is discussed below [7]:
Step 1: The first step creates a representation of the document. Some preprocessing such as tokenization, stop word removal, noise removal, stemming, sentence splitting, frequency computation, etc. is applied here.
Step 2: In this step, sentence scoring is performed.
In general, three approaches are followed:
• Word scoring – assigning scores to the most important words;
• Sentence scoring – verifying sentence features such as its position in the document, similarity to the title, etc.; and
• Graph scoring – analyzing the relationships between sentences.
The general methods for calculating the score of any word are word frequency, TF/IDF, upper case, proper noun, word co-occurrence, lexical similarity, etc. The common phenomena used for scoring any sentence are cue phrases (''in summary'', ''in conclusion'', ''our investigation'', ''the paper describes'' and emphasizers such as ''the best'', ''the most important'', ''according to the study'', ''significantly'', ''important'', ''in particular'', ''hardly'', ''impossible''), sentence inclusion of numerical data, sentence length, sentence centrality, sentence resemblance to the title, etc. The popular graph scoring methods are text rank, bushy path of the node, aggregate similarity, etc.
978-1-5090-6004-7/17/$31.00 ©2017 IEEE
Step 3: In this step, high-score sentences are selected using a specific sorting order for extracting the contents, and then the final summary is generated if it is a single-document summarization. For multi-document summarization, the process needs to be extended: each document produces one summary, and then a clustering algorithm is applied to cluster the relevant sentences of each summary to generate the final summary.

B. General Approaches for Extractive Text Summarization
Extractive summarizers [8] find out the most relevant sentences in the document and also remove the redundant data. Extractive
summarization is easier than abstractive summarization for bringing out the summary. The common methods for extractive summarization are the TF/IDF method, cluster-based methods, graph-theoretic approaches, machine learning approaches, the LSA (Latent Semantic Analysis) method, text summarization with neural networks, automatic text summarization based on fuzzy logic, query-based extractive text summarization, concept-obtained text summarization, text summarization using regression for estimating feature weights, multilingual extractive text summarization, topic-driven summarization, MMR (Maximal Marginal Relevance) and centroid-based summarization, etc.

II. RELATED WORK
Previous works on single-document or multi-document summarization have tried different directions to show the best result. Till now, various generic multi-document extraction-based summarization techniques have been developed, most of them for English rather than other natural languages like Bengali. In this section, we discuss some previous works on extractive text summarization. J. Zhang [9] presented an approach for multi-document text summarization using a Cue-based hub-authority method. It is a graph-based summarization that detects sub-topics by sentence clustering using K-nearest neighbors (KNN). Y. Ouyang [10] presented an integrated multi-document summarization approach based on hierarchical representation. In this paper, query relevancy and topic specificity are used in the filtering process. It also calculates point-wise mutual information (PMI) for identifying the subsumption between words, and word pairs with high PMI are regarded as correlated. Then, a hierarchical tree is constructed to produce the summarization. X. Li, J. Zhang and M. Xing [11] proposed an automatic summarization technique for Chinese text which is based on sub-topic partition and sentence features.
In this process, the sentence weight is calculated by the LexRank algorithm combined with the score of the sentence's own features such as its length, position, cue words and structure. Since some problems were found when applying the LexRank algorithm, an automatic summarization based on a maximum spanning tree and sentence features is proposed to overcome them. P. Hu, T. S. He and H. Wang [12] proposed a multi-view sentence ranking for query-biased summarization. The proposed approach first constructs two base rankers to rank all the sentences in a document set from two independent but complementary views (i.e. a query-dependent view and a query-independent view), and then aggregates them into a consensus one. K. Sarkar [13] presented a summarization approach based on sentence clustering for multi-document texts, in which sentences are clustered using a similarity-histogram-based sentence-clustering algorithm to identify multiple sub-topics from the input set of related documents, and the representative sentences from the appropriate clusters are selected to form the final summary. A. Kogilavani [14] presented multi-document summarization using clustering and feature-specific sentence extraction. V. K. Gupta [15] proposed a query-focused extractive summarization approach for English text where single-document summaries are combined using a sentence clustering method to generate the multi-document summary. For clustering, semantic and syntactic similarity between sentences are used. A. R. Deshpande [16] presented another multi-document English text summarization technique using a clustering method where documents are clustered using cosine similarity. A. Agrawal and U. Gupta [17] proposed an extractive clustering-based technique for single-document summarization. This clustering approach summarizes English text by
using the K-means clustering algorithm. M. A. Uddin [18] presented a multi-document text summarization for Bengali text where a TF-based technology is used for extracting the most significant contents from a set of Bengali documents. Another work, proposed by M. Ibrahim [19] for Bengali document summarization, is based on sentence scoring and ranking.

III. PROPOSED TECHNIQUE
The Bengali document summarizer is a natural language processing (NLP) application of data mining which is proposed to extract the most important information of the document(s). We use a sentence clustering approach to generate a summary from both single and multiple documents. The proposed technique is used for summarizing Bengali document(s). In this technique, the common preprocessing steps including noise removal, tokenization, stop word removal and stemming [20] are used. TF*IDF [21] is used for calculating each word score. Then, the score of each sentence is calculated as the total sum of its word scores together with its position [19]. If the sentence contains any cue word or skeleton word, then the score is increased by 1 [19]. Next, the document is stored in a separate file with its corresponding sentences' scores. For multiple documents, the preprocessing, word scoring and sentence scoring operations are repeated for each document one by one as mentioned above, and the documents are stored in the same file; that is, all the documents are merged. After that, the sentences are sorted in descending order, and the highest score is taken as centroid 1 and the lowest score as centroid 2 to apply the K-means clustering algorithm [1], [22]. Then, the top K sentences are extracted from each cluster and the final summary is generated. Here, K sentences can be measured as 30% of the sentences of the original merged document.

A.
Flow Chart of the Proposed Technique
The proposed extractive technique of summarizing Bengali document(s) is illustrated with the following flow chart representation in Fig. 1.
TF/IDF: Term Frequency/Inverse Document Frequency; SC: Sentence Score; PV: Position Value
Figure 1. Extractive Bengali document(s) summarization technique

B. Pseudo-code of the Proposed Technique
TEXTSUM ( ) is the caller function that calls two procedures, Stemming ( ) and k-means_algorithm ( ), to generate the final summary.

Procedure: TEXTSUM (SC, COUNT, K, N, TotS, CHECK)
i. [Start with COUNT] For single-document summarization, set COUNT := 1; for multi-document summarization, set COUNT := N, where N is the number of documents.
ii. [For the current document] Count the total number of sentences, TotS.
iii. Set CHECK := 1.
iv. Repeat steps (v) to (xiii) while CHECK <= TotS.
v. Remove noise from the sentence S.
vi. Tokenize the sentence S.
vii. Optionally remove stop words from S.
viii. Call procedure Stemming ( ).
ix. [Calculate the score of each word w using TF/IDF] Score(w) := TF(w, S) * log10(N_t / n_w + 1), where TF(w, S) is the number of occurrences of w in the sentence S, N_t is the total number of sentences in the text and n_w is the number of sentences in which w occurs.
x. [Calculate the score SC of each sentence S] SC(S) := sum of Score(w) over all words w of S, plus PV, where PV = 1 / sqrt(SP) and SP is the position of the sentence.
xi. [Check S for cue words] If S contains a cue word, then increase SC := SC + 1.
xii. [Check S for skeleton word] If S contains
a skeleton word, then increase SC := SC + 1.
xiii. Set CHECK := CHECK + 1 and go to step (iv).
xiv. Store the document in a file with the scores.
xv. Set COUNT := COUNT - 1.
xvi. [Check for another document to score] If COUNT != 0, then go to step (ii); else, go to step (xvii).
xvii. Sort the stored sentence scores in decreasing order.
xviii. [Cluster the document using the K-means algorithm] Call procedure k-means_algorithm ( ).
xix. Extract the top K sentences from each cluster to get the final summary of the document(s).
xx. END.

Procedure: Stemming ( )
i. Read a token from the line.
ii. Load the suffix lists from the stored file.
iii. [Check the suffix list against the input token] If the token matches any suffix, then discard the suffix and mark the token as a root word.
iv. Repeat from step (i) until all tokens are processed.

Procedure: k-means_algorithm (m1, m2, C1, C2, d1, d2, av1, av2)
i. [Initialize centroids m1 and m2] m1 := highest score and m2 := lowest score.
ii. [Measure the distance from the centroids to each sentence S] d1 := m1 - SC(S), d2 := m2 - SC(S).
iii. [Make the distances non-negative] If d1 < 0 then d1 := -d1; if d2 < 0 then d2 := -d2.
iv. [Create clusters] If d1 < d2 then add S to C1, else add S to C2.
v. [Calculate new means] Find the average values (av1, av2) of clusters C1 and C2.
vi. [Assign the average values to the means] m1 := av1 and m2 := av2.
vii. Repeat steps (ii) to (vi) until the values of m1 and m2 in two consecutive iterations remain unchanged.
viii. Return.

1) Explanation of the Pseudo-code
The steps of the proposed method are discussed here in detail:
i. Preprocessing
The actions performed in this step are:
• Noise removal is concerned with removing the header, footer, etc. from the document.
• Tokenization separates each word into lexical form. Words are separated by কমা, দাঁিড় etc.
• The stop words are function words like eবং, aথবা, িকn, aনয্থায়, িকংবা, মাt etc., and they may be removed.
• Stemming – Words appearing in different forms in the same document need to be converted to their original form for simplicity; for example, বাংলােদেশ, বাংলােদেশর, বাংলােদশেক, বাংলােদেশo etc. should all be converted to their original form বাংলােদশ. In the proposed technique, we used the stemming rules that are illustrated in [20]. Let's consider an example: কিরম কাজিট করেছ. After stemming it will be কিরম কাজ কর. Some examples of word stemming are shown in Table I.

TABLE I. SOME WORD STEMMING EXAMPLES
Suffix | Original words | After stemming
i | eটাi, েসটাi | eটা, েসটা
েতা | হয়েতা, করলেতা | হয়, করল
েক | eটােক, আমােক | eটা, আমা
ে◌.ে◌ -> ◌া. | েহেস, েনেচ | হাসা, নাচা
ে◌.ে◌িছেলন -> ◌া. | েহেসিছেলন, েনেচিছেলন | হাসা, নাচা

ii. Scoring Process
• Word scoring technique (TF/IDF): This approach is used for scoring the words. If there are more unique words in a given sentence, then the sentence is relatively more important [23]. The TF/IDF score is calculated as follows:

Score(w) = TF(w, S) * log10(N / n_w + 1)

where TF(w, S) = number of occurrences of the word w in the sentence S, log10(N / n_w + 1) = inverse document frequency, N = total number of sentences in the text, and n_w = number of sentences in which the word w occurs.
• Sentence scoring (SC): SC(S) = sum of Score(w) over all words w of S, plus PV, where PV = 1 / sqrt(SP). Here, SP = position of the sentence and PV = position value; for example, SP = 1 if it is the first sentence of a document.
• Cue words – If any cue word (e.g., েমাটকথা, aবেশেষ, iিতমেধয্, েযেহতু etc.) is found in a sentence, then the score of the sentence is incremented by 1.
• Skeleton words – If any skeleton word (e.g., a word from the headline of the document) is found, then the score of the sentence is again incremented by 1.
• Store the document in a separate file with the corresponding sentences' scores for further processing. For multiple documents, all the previous processes are applied to each document and the results are stored in the same file, where they are then merged. The scores are then sorted in decreasing order for further processing.

iii. Applying the K-means clustering algorithm
After sorting the scores, the lowest and the highest scores are assigned as the two centroids of the K-means algorithm, and the distance from each centroid to each sentence is measured. The nearest centroid defines the cluster of a sentence. Thus, two clusters are created, and for the next iteration the centroid values are updated: the average value of each cluster is calculated and assigned as the new centroid. This process is repeated until two consecutive iterations produce the same result. At last, the top K sentences are extracted from each cluster to produce the final summary.

IV. EXPLANATION WITH AN EXAMPLE
Let's consider there are two Bengali documents to be summarized.
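Before walking through the example, the clustering step just described can be sketched in a few lines of Python. This is a minimal illustrative sketch of procedure k-means_algorithm ( ) (k = 2 on one-dimensional sentence scores), not the authors' Java implementation; the function names and the example scores below are our own.

```python
# Minimal sketch of k-means_algorithm(): two centroids initialized to the
# highest and lowest sentence score, then iterated until the means converge.
# Assumes at least two distinct scores so that neither cluster is empty.

def cluster_scores(scores, max_iter=100):
    """Split sentence scores into two clusters around the highest/lowest score."""
    m1, m2 = max(scores), min(scores)              # initial centroids (step i)
    c1, c2 = [], []
    for _ in range(max_iter):
        # assign each score to the nearest centroid (steps ii-iv)
        c1 = [s for s in scores if abs(m1 - s) < abs(m2 - s)]
        c2 = [s for s in scores if abs(m1 - s) >= abs(m2 - s)]
        # recompute the cluster means (steps v-vi)
        new_m1, new_m2 = sum(c1) / len(c1), sum(c2) / len(c2)
        if (new_m1, new_m2) == (m1, m2):           # converged (step vii)
            break
        m1, m2 = new_m1, new_m2
    return c1, c2

def top_k(cluster, k):
    """Top-k scores of a cluster, used to build the final summary."""
    return sorted(cluster, reverse=True)[:k]
```

For instance, `cluster_scores([28.36, 26.14, 24.77, 9.68, 9.47, 8.25])` separates the three high scores from the three low ones, after which `top_k` picks the summary sentences of each cluster.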
The first document contains 10 sentences and 131 words is [24]: বাংলােদেশর ত ণ েpাgামার o pযুিkগত িবষেয় ঈষর্নীয় সাফলয্লাভকারীেদর িনেয় পৃিথবী জুেড়i pশংসা চলেছ। বাংলােদশ ধীের ধীের eিগেয় যােc pযুিkগত uৎকষর্তার িদেক। eখনকার িশkাথ o ত ণ pজn িবjান o িবjান িনভর্ র পড়ােশানা িনেয় aেনক েবিশ সেচতন। সরকার তাi ei েktিটেক আেরা বড় eকিট pয্াটফমর্ িহেসেব দাঁড়া করােত চান।িবjান o pযুিk িনেয় যারা খবর রােখন, তারা িন য়i সামািজক মাধয্ম িকংবা নানা ধরেণর খবের েনেছন িডিজটাল oয়াlর্ 2016 eর। আগামী 19-21 aেkাবর 2016 বসুnরা কনেভনশন িসিটেত aনুি ত হেব েদেশর সবেচেয় বড় আiিসিট iেভn িডিজটাল oয়াlর্ 2016। eখােন থাকেব নতুন নতুন udাবন o pযুিkগত নানা আেলাচনাo থাকেছ ei আেয়াজেন। ei aনু ােন নতুন udাবন কয্াটাগরীেত sুল, কেলজ o িব িবদয্ালেয়র িবjানমন িশkাথ রা তােদর pকl uপsাপেনর সুেযাগ পােব। তারা েযখােন ei pদশর্নীিট করেব তার নাম হেc, “iেনােভশন েজান”। ei দািয়t o তttাবধােন থাকেছ গল েডেভলপার gপস বাংলা। The second document on the same topic contains 14 sentences and 219 words is [25]: ‘ননsপ বাংলােদশ’ েsাগানেক সামেন েরেখ বাংলােদেশ হল িতন িদনবয্াপী তথয্ o েযাগােযাগ pযুিk িবষয়ক েদেশর সবেচেয় বড় েমলা ‘িডিজটাল oয়াlর্ -2016’। বুধবার রাজধানী ঢাকার inারনয্াশনাল কনেভনশন িসিট বসুnরায় (আiিসিসিব) e েমলার uেdাধন কেরন pধানমntী েশখ হািসনা। িতন িদনবয্াপী ei েমলা আগামী kবার পযর্n pিত িদন সকাল 10টা েথেক রাত 8টা পযর্n সকেলর জনয্ েখালা</s>
থাকেব। uেdাধনী aনু ােন pধান aিতিথর ভাষেণ pধানমntী েশখ হািসনা বেলেছন, “আiিসিট বয্বহাের ত ণ জনেগা ী িনেয় আমরা ‘লািনর্ং aয্াn আিনর্ং’ pকl চালু কেরিছ। pকেlর আoতায় 50 হাজার ত ণ-ত ণীর pিশkেণর বয্বsা করা হেয়েছ।” িডিজটাল িসিকuিরিট aয্াk-2016 করা হেc জািনেয় েশখ হািসনা বেলন, “আoয়ািম িলগ সরকার েদেশর sােথর্ aথর্ বয্য় কের সাবেমিরন েকবেলর মাধয্েম যুk হেয়েছ। আমরা িশkা বয্বsা unত করেত সারা েদেশ 30 হাজার মািlিমিডয়া kাস চালু কেরিছ। 2018 সােলর মেধয্ আরo দশ হাজার েশখ রােসল িডিজটাল লয্াব চালু করা হেব।” iেতােমেধয্ েদেশর pায় সব uপ-েজলােতi ি -িজ েপৗঁেছ িগেয়েছ। আগামী 2017 সােলর মেধয্ েফার-িজ চালু হেয় যােব বেলo জানান pধানমntী। বাংলােদশ সংবাদ সংsা (বাসস) জািনেয়েছ, িডিজটাল িবষয়ক নয়া pযুিk o aিভনবt িবষেয় ধারণা o তথয্ আদানpদােনর জনয্ ei েমলা। 40িট মntণালয় িডিজটাল বাংলােদশ িহেসেব কী কী পিরেষবা িদেc তার খঁুিটনািট তুেল ধরা হেব ei েমলায়। eেত শতািধক েবসরকাির pিত ান তােদর িডিজটাল কাযর্kম তুেল ধরেব। িতন িদনবয্াপী েমলায় মাiেkাসফ্ট, েফসবুক, eকেসন্চার, িব বয্া , েজডিটi, য়াoেয়-সহ িব pিত ােনর 43 জন িবেদিশ বkা-সহ i শতািধক বkা 18িট েসশেন aংশ েনেবন।

The score calculation of the words for the first sentence of the first document is shown in Table II.

TABLE II. WORD SCORE OF THE FIRST SENTENCE
Word | After stemming | Occurrences in the sentence, TF(w, S) | Sentences containing the word, n_w | Score(w) = TF(w, S) * log10(N/n_w + 1)
বাংলােদেশর | বাংলােদশ | 1 | 1 | 1.04
ত ণ | ত ণ | 1 | 2 | 0.78
েpাgামার | েpাgাম | 1 | 1 | 1.04
o | o | 1 | 6 | 0.43
pযুিk | pযুিk | 1 | 3 | 0.64
িবষেয় | িবষয় | 1 | 1 | 1.04
ঈষর্নীয় | ঈষর্া | 1 | 1 | 1.04
সাফলয্লাভকারীেদর | সাফলয্ | 1 | 1 | 1.04
িনেয় | িনেয় | 1 | 3 | 0.64
পৃিথবী | পৃিথবী | 1 | 1 | 1.04
জুেড়i | জুেড় | 1 | 1 | 1.04
pশংসা | pশংসা | 1 | 1 | 1.04
চলেছ | চল | 1 | 1 | 1.04

Similarly, using TF/IDF all the word scores are calculated. Then, the score of every sentence is calculated by summing up the constituent words' scores with their position using the mentioned formula. Also, the score of a sentence is increased when it contains any cue word or skeleton word or both.
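The word scores in Table II can be reproduced with a base-10 logarithm: Score(w) = TF(w, S) * log10(N / n_w + 1), with N = 10 sentences in the first document. The following small sketch checks this (the function names are ours, not from the paper):

```python
import math

def word_score(tf, n_w, n_sentences):
    """TF/IDF word score: tf * log10(n_sentences / n_w + 1)."""
    return tf * math.log10(n_sentences / n_w + 1)

def position_value(sp):
    """Position value PV = 1 / sqrt(sp) of a sentence at 1-based position sp."""
    return 1 / math.sqrt(sp)
```

With N = 10, `word_score(1, 1, 10)` is about 1.04 (e.g. বাংলােদেশর), `word_score(1, 2, 10)` about 0.78, and `word_score(1, 6, 10)` about 0.43, matching Table II; summing the thirteen word scores of the sentence and adding `position_value(1)` = 1 gives 12.85, the score SC(16) listed for this sentence in Table III.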
After getting the sentences’ score of all documents, they have been sorted in a merged file which is shown in Table III for the considered example of two documents having 24 sentences in total. TABLE III. MERGED FILE WITH SENTENCE SCORES Sentence score notation Score Sentence SC(1) 28.36 ” িডিজটাল িসিকuিরিট aয্াk-2016 করা হেc জািনেয় েশখ হািসনাবেলন, “আoয়ািম িলগ সরকার েদেশর sােথর্ aথর্ বয্য় কের সাবেমিরন েকবেলর মাধয্েম যুk হেয়েছ। SC(2) 26.14 িতন িদনবয্াপী েমলায় মাiেkাস , েফসবুক, eকেসন্চার, িব বয্া , েজডিটi, য়াoেয়-সহ িব pিত ােনর 43 জন িবেদিশ বkা-সহ i শতািধক বkা 18িট েসশেন aংশ েনেবন। SC(3) 24.77 ননsপ বাংলােদশ’ েsাগানেক সামেন েরেখ বাংলােদেশ হল িতনিদনবয্াপী তথয্ o েযাগােযাগ pযুিk িবষয়ক েদেশর সবেচেয় বড় েমলা ‘িডিজটাল oয়াlর্ -2016’। SC(4) 24.68 িতন িদনবয্াপী ei েমলা আগামী kবার পযর্n pিত িদন সকাল 10টােথেক রাত 8টা পযর্n সকেলর জনয্ েখালা থাকেব। SC(5) 24.50 uেdাধনী aনু ােন pধান aিতিথর ভাষেণ pধানমntী েশখ হািসনাবেলেছন, “আiিসিট বয্বহাের ত ণ জনেগা ী িনেয় আমরা ‘লািনর্ং aয্াn আিনর্ং’ pকl চালু কেরিছ। SC(6) 21.52 বাংলােদশ সংবাদ সংsা</s>
<s>(বাসস) জািনেয়েছ, িডিজটাল িবষয়ক নয়াpযুিk o aিভনবt িবষেয় ধারণা o তথয্ আদানpদােনর জনয্ ei েমলা। SC(7) 20.59 40িট মntণালয় িডিজটাল বাংলােদশ িহেসেব কী কী পিরেষবা িদেc তারখঁুিটনািট তুেল ধরা হেব ei েমলায়। SC(8) 20.26 িবjান o pযুিk িনেয় যারা খবর রােখন, তারা িন য়i সামািজক মাধয্মিকংবা নানা ধরেণর খবের েনেছন িডিজটাল oয়াlর্ 2016 eর। SC(9) 19.39 আগামী 19-21 aেkাবর 2016 বসুnরা কনেভনশন িসিটেত aনুি ত হেবেদেশর সবেচেয় বড় আiিসিট iেভn – িডিজটাল oয়াlর্ 2016। SC(10) 17.75 বুধবার রাজধানী ঢাকার inারনয্াশনাল কনেভনশন িসিট বসুnরায়(আiিসিসিব) e েমলার uেdাধন কেরন pধানমntী েশখ হািসনা। SC(11) 15.67 আমরা িশkা বয্বsা unত করেত সারা েদেশ 30 হাজার মািlিমিডয়াkাস চালু কেরিছ। SC(12) 15.37 2018 সােলর মেধয্ আরo দশ হাজার েশখ রােসল িডিজটাল লয্াব চালুকরা হেব। SC(13) 14.87 eখনকার িশkাথ o ত ণ pজn িবjান o িবjান িনভর্ র পড়ােশানািনেয় aেনক েবিশ সেচতন। SC(14) 14.55 ei aনু ােন নতুন udাবন কয্াটাগরীেত sুল, কেলজ o িব িবদয্ালেয়র িবjানমন িশkাথ রা তােদর pকl uপsাপেনর সুেযাগ পােব। SC(15) 13.25 আগামী 2017 সােলর মেধয্ েফার-িজ চালু হেয় যােব বেলo জানানpধানমntী। SC(16) 12.85 বাংলােদেশর ত ণ েpাgামার o pযুিkগত িবষেয় ঈষর্নীয়সাফলয্লাভকারীেদর িনেয় পৃিথবী জুেড়i pশংসা চলেছ। SC(17) 11.67 সরকার তাi ei েktিটেক আেরা বড় eকিট pয্াটফমর্ িহেসেব দাঁড়াকরােত চান। SC(18) 11.53 eখােন থাকেব নতুন নতুন udাবন o pযুিkগত নানা আেলাচনাoথাকেছ ei আেয়াজেন। SC(19) 11.03 pকেlর আoতায় 50 হাজার ত ণ-ত ণীর pিশkেণর বয্বsা করাহেয়েছ। SC(20) 10.45 বাংলােদশ ধীের ধীের eিগেয় যােc pযুিkগত uৎকষর্তার িদেক। SC(21) 9.74 ” iেতােমেধয্ েদেশর pায় সব uপ-েজলােতi ি -িজ েপৗঁেছ িগেয়েছ। SC(22) 9.68 তারা েযখােন ei pদশর্নীিট করেব তার নাম হেc, “iেনােভশন েজান”। SC(23) 9.47 eেত শতািধক েবসরকাির pিত ান তােদর িডিজটাল কাযর্kম তুেল ধরেব। SC(24) 8.25 ei দািয়t o তttাবধােন থাকেছ গল েডেভলপার gপস বাংলা। Then, K-means clustering algorithm has been applied with m1=28.36 and m2=8.25 as the centroids. As discussed above, the final iteration is shown in Table IV. TABLE IV. 
FINAL ITERATION After extracting top 5 (K) sentences from each cluster, the finally produced summary, which contains 10 sentences and 173 words, is shown below: ” িডিজটাল িসিকuিরিট aয্াk-2016 করা হেc জািনেয় েশখ হািসনা বেলন, “আoয়ািম িলগ সরকার েদেশর sােথর্ aথর্ বয্য় কের সাবেমিরন েকবেলর মাধয্েম যুk হেয়েছ। িতন িদনবয্াপী েমলায় মাiেkাসফ্ট, েফসবুক, eকেসন্চার, িব বয্া , েজডিটi, য়াoেয়-সহ িব pিত ােনর 43 জন িবেদিশ বkা-সহ i শতািধক বkা 18িট েসশেন aংশ েনেবন। ‘ননsপ বাংলােদশ’ েsাগানেক সামেন েরেখ বাংলােদেশ হল িতন িদনবয্াপী তথয্ o েযাগােযাগ pযুিk িবষয়ক েদেশর সবেচেয় বড় েমলা ‘িডিজটাল oয়াlর্ -2016’। িতন িদনবয্াপী ei েমলা আগামী kবার পযর্n pিত িদন সকাল 10টা েথেক রাত 8টা পযর্n সকেলর জনয্ েখালা থাকেব।uেdাধনী aনু ােন pধান aিতিথর ভাষেণ pধানমntী েশখ হািসনা বেলেছন, “আiিসিট বয্বহাের ত ণ জনেগা ী িনেয় আমরা ‘লািনর্ং aয্াn আিনর্ং’ pকl চালু কেরিছ।আমরা িশkা বয্বsা unত করেত সারা েদেশ 30 হাজার মািlিমিডয়া kাস চালু কেরিছ।2018 সােলর মেধয্ আরo দশ হাজার েশখ রােসল িডিজটাল লয্াব চালু করা হেব। eখনকার িশkাথ o ত ণ pজn িবjান o িবjান িনভর্ র পড়ােশানা িনেয় aেনক েবিশ সেচতন।ei aনু ােন নতুন udাবন কয্াটাগরীেত sুল, কেলজ o িব িবদয্ালেয়র িবjানমন িশkাথ রা তােদর pকl uপsাপেনর সুেযাগ পােব। আগামী</s>
2017 সােলর মেধয্ েফার-িজ চালু হেয় যােব বেলo জানান pধানমntী।

The final iteration of the K-means algorithm on the 24 sentence scores (Table IV) is:

Centroid | Score | Sentence
m1 = 23.25 | 29.03 | ” িডিজটাল িসিকuিরিট aয্াk-2016 করা হেc জািনেয় েশখ হািসনা বেলন, “আoয়ািম িলগ সরকার েদেশর sােথর্ aথর্ বয্য় কের সাবেমিরন েকবেলর মাধয্েম যুk হেয়েছ।
| 26.78 | িতন িদনবয্াপী েমলায় মাiেkাসফ্ট, েফসবুক, eকেসন্চার, িব বয্া , েজডিটi, য়াoেয়-সহ িব pিত ােনর 43 জন িবেদিশ বkা-সহ i শতািধক বkা 18িট েসশেন aংশ েনেবন।
| 25.35 | ‘ননsপ বাংলােদশ’ েsাগানেক সামেন েরেখ বাংলােদেশ হল িতন িদনবয্াপী তথয্ o েযাগােযাগ pযুিk িবষয়ক েদেশর সবেচেয় বড় েমলা ‘িডিজটাল oয়াlর্ -2016’।
| 25.26 | িতন িদনবয্াপী ei েমলা আগামী kবার পযর্n pিত িদন সকাল 10টা েথেক রাত 8টা পযর্n সকেলর জনয্ েখালা থাকেব।
| 25.09 | uেdাধনী aনু ােন pধান aিতিথর ভাষেণ pধানমntী েশখ হািসনা বেলেছন, “আiিসিট বয্বহাের ত ণ জনেগা ী িনেয় আমরা ‘লািনর্ং aয্াn আিনর্ং’ pকl চালু কেরিছ।
| 22.07 | বাংলােদশ সংবাদ সংsা (বাসস) জািনেয়েছ, িডিজটাল িবষয়ক নয়া pযুিk o aিভনবt িবষেয় ধারণা o তথয্ আদানpদােনর জনয্ ei েমলা।
| 21.08 | 40িট মntণালয় িডিজটাল বাংলােদশ িহেসেব কী কী পিরেষবা িদেc তার খঁুিটনািট তুেল ধরা হেব ei েমলায়।
| 20.26 | িবjান o pযুিk িনেয় যারা খবর রােখন, তারা িন য়i সামািজক মাধয্ম িকংবা নানা ধরেণর খবের েনেছন িডিজটাল oয়াlর্ 2016 eর।
| 19.39 | আগামী 19-21 aেkাবর 2016 বসুnরা কনেভনশন িসিটেত aনুি ত হেব েদেশর সবেচেয় বড় আiিসিট iেভn – িডিজটাল oয়াlর্ 2016।
| 18.16 | বুধবার রাজধানী ঢাকার inারনয্াশনাল কনেভনশন িসিট বসুnরায় (আiিসিসিব) e েমলার uেdাধন কেরন pধানমntী েশখ হািসনা।
m2 = 11.36 | 16.03 | আমরা িশkা বয্বsা unত করেত সারা েদেশ 30 হাজার মািlিমিডয়া kাস চালু কেরিছ।
| 15.73 | 2018 সােলর মেধয্ আরo দশ হাজার েশখ রােসল িডিজটাল লয্াব চালু করা হেব।
| 14.87 | eখনকার িশkাথ o ত ণ pজn িবjান o িবjান িনভর্ র পড়ােশানা িনেয় aেনক েবিশ সেচতন।
| 14.55 | ei aনু ােন নতুন udাবন কয্াটাগরীেত sুল, কেলজ o িব িবদয্ালেয়র িবjানমন িশkাথ রা তােদর pকl uপsাপেনর সুেযাগ পােব।
| 13.56 | আগামী 2017 সােলর মেধয্ েফার-িজ চালু হেয় যােব বেলo জানান pধানমntী।
| 12.85 | বাংলােদেশর ত ণ েpাgামার o pযুিkগত িবষেয় ঈষর্নীয় সাফলয্লাভকারীেদর িনেয় পৃিথবী জুেড়i pশংসা চলেছ।
| 11.67 | সরকার তাi ei েktিটেক আেরা বড় eকিট pয্াটফমর্ িহেসেব দাঁড়া করােত চান।
| 11.53 | eখােন থাকেব নতুন নতুন udাবন o pযুিkগত নানা আেলাচনাo থাকেছ ei আেয়াজেন।
| 11.28 | pকেlর আoতায় 50 হাজার ত ণ-ত ণীর pিশkেণর বয্বsা করা হেয়েছ।
| 10.45 | বাংলােদশ ধীের ধীের eিগেয় যােc pযুিkগত uৎকষর্তার িদেক।
| 9.97 | ” iেতােমেধয্ েদেশর pায় সব uপ-েজলােতi ি -িজ েপৗেঁছ িগেয়েছ।
| 9.71 | eেত শতািধক েবসরকাির pিত ান তােদর িডিজটাল কাযর্kম তুেল ধরেব।
| 9.69 | তারা েযখােন ei pদশর্নীিট করেব তার নাম হেc, “iেনােভশন েজান”।
| 8.25 | ei দািয়t o তttাবধােন থাকেছ গল েডেভলপার gপস বাংলা।

V. EXPERIMENTAL RESULT AND DISCUSSION
The proposed extractive Bengali document(s) summarization technique is implemented using the IDE for Java applications, “Netbeans IDE 8.0”. The performance analysis is performed on a 2.50 GHz Intel® Core™ i5 CPU with 4 GB RAM running the Windows 7 Ultimate operating system. The comparison of the proposed technique with the existing approaches is shown in Table V. So, the proposed technique summarizes both single and multiple Bengali documents through noise removal, tokenization and stemming, scoring each word by TF/IDF and then each sentence, and applying the K-means clustering algorithm.

TABLE V. COMPARISON WITH EXISTING METHODS
Author / Technique | Language | Document type | Major operations
K. Sarkar [13] | English | Multiple | Preprocessing, clustering using cosine similarity, cluster ordering
T. J. Siddiqui [15] | English | Multiple | Preprocessing, sentence scoring (feature based), clustering using syntactic & semantic similarity
A. R. Deshpande [21] | English | Multiple (query based) | Preprocessing (TF*IDF), sentence scoring (feature based), clustering using cosine similarity
A. Agrawal [17] | English | Single | Word scoring (TF*IDF), sentence scoring, K-means clustering
M. A. Uddin [18] | Bengali | Multiple | Preprocessing, sentence scoring (TTF), cosine similarity measure, A* algorithm
M. I. A. Efat [19] | Bengali | Single | Preprocessing, sentence scoring (feature based), sentence ranking
Proposed technique | Bengali | Single or multiple | Preprocessing (noise removal, tokenization, stemming), word scoring (TF/IDF), sentence scoring, K-means clustering algorithm

Several experiments have been conducted to evaluate the proposed technique; some are evaluated on single-document and some on multi-document summarization. The proposed technique produces a final summary that contains 30% of the sentences of the original merged document, and it gives the expected performance in comparison to the existing approaches. The time complexity of the proposed technique is O(n), which offers linearity. The main pitfall is that sometimes the sequence of summarized sentences is not synchronized. However, the technique can be applied in various summarizing fields, such as summarizing similar articles from different newspapers, blogs, books etc.

VI. CONCLUSION
In this paper, an extractive-based Bengali text summarization technique has been proposed for both single and multiple documents. In this summarization, some important sentences are extracted from the original document(s).
We have compared the results with different extractive techniques and also measured the run-time complexity, which shows that the performance of the proposed technique is improved. According to the results of the proposed technique, we can conclude that it reduces redundancy and provides better summarization. How to measure similarity is also a crucial issue in sentence-clustering-based summarization approaches: a better similarity measure will improve the clustering performance, and this may in turn improve the summarization performance. In the future, the relevancy of the sentences can be measured using syntactic and semantic similarity.

REFERENCES
[1] Unnamed, Big Data. [Online]. Available: http://searchcloudcomputing.techtarget.com/definition/big-data-Big-Data
[2] R. M. Chezian and G. Ahilandeeswari, "A Survey on Approaches for Frequent Item Set Mining on Apache Hadoop", International Journal for Trends in Engineering & Technology, Vol. 3, Issue 3, India, March 2015.
[3] A. Totewar, Data Mining: Concepts and Techniques. [Online]. Available: http://www.slideshare.net/akannshat/data-mining-15329899, India, 2016.
[4] R. Pradheepa and K. Pavithra, "A Survey on Overview of Data Mining", International Journal of Advanced Research in Computer and Communication Engineering, Vol. 5, Issue 8, India, 2016.
[5] Unnamed, Data Mining - Applications & Trends. [Online]. Available: https://www.tutorialspoint.com/data_mining/dm_applications_trends.htm
[6] F. El-Ghannam and T. El-Shishtawy, "Multi-Topic Multi-Document Summarizer", International Journal of Computer Science & Information Technology (IJCSIT), Vol. 5, No. 6, India, 2013.
[7] A. Nenkova and K. McKeown, "Automatic Summarization", Foundations and Trends® in Information Retrieval, Vol. 5, Nos. 2-3, pp. 103-233, Boston - Delft, 2011.
[8] V. Gupta and G. S. Lehal, "A Survey of Text Summarization Extractive Techniques", Journal of Emerging Technologies in Web Intelligence,
Vol. 2, No. 3, India, 2010.
[9] J. Zhang, L. Sun and Q. Zhou, "Cue-based Hub-Authority Approach for Multi-document Text Summarization", IEEE International Conference on Natural Language Processing and Knowledge Engineering, pp. 642-645, China, 2005.
[10] Y. Ouyang, W. Li and Q. Lu, "An Integrated Multi-document Summarization Approach based on Word Hierarchical Representation", Proceedings of the ACL-IJCNLP, pp. 113-116, China, 2009.
[11] X. Li, J. Zhang and M. Xing, "Automatic Summarization for Chinese Text based on Sub Topic Partition and Sentence Features", IEEE 2nd International Symposium on Intelligence Information Processing and Trusted Computing (IPTC), China, 2011.
[12] P. Hu, T. He and H. Wang, "Multi-View Sentence Ranking for Query-Biased Summarization", IEEE International Conference on Computational Intelligence and Software Engineering (CISE), China, Dec. 2010.
[13] K. Sarkar, "Sentence Clustering-based Summarization of Multiple Text Documents", TECHNIA - International Journal of Computing Science and Communication Technologies, Vol. 2, No. 1, India, 2009.
[14] A. Kogilavani and P. Balasubramani, "Clustering and Feature Specific Sentence Extraction Based Summarization of Multi-documents", International Journal of Computer Science & Information Technology (IJCSIT), Vol. 2, No. 4, India, August 2010.
[15] T. J. Siddiqui and V. K. Gupta, "Multi-document Summarization using Sentence Clustering", IEEE Proceedings of the 4th International Conference on Intelligent Human Computer Interaction, India, 2012.
[16] R. Kamal, Bangla-Stemmer. [Online]. Available: https://github.com/rafikamal/Bangla-Stemmer
[17] A. Agrawal and U. Gupta, "Extraction Based Approach for Text Summarization using K-means Clustering", International Journal of Scientific and Research Publications, Vol. 4, Issue 11, India, 2014.
[18] M. A. Uddin, K. Z. Sultana and M. A. Alom, "A Multi-Document Text Summarization for Bengali Text", IEEE International Forum on Strategic Technology (IFOST), Bangladesh, 2014.
[19] M. I. A. Efat, M. Ibrahim and H. Kayesh, "Automated Bangla Text Summarization by Sentence Scoring and Ranking", IEEE International Conference on Informatics, Electronics & Vision (ICIEV), Bangladesh, 2013.
[20] A. Mhatre, Implementation of K-means Algorithm in C++. [Online]. Available: http://ankurm.com/implementation-of-k-means-algorithm-in-c/
[21] A. R. Deshpande and L. M. R. J. Lobo, "Text Summarization using Clustering Technique", International Journal of Engineering Trends and Technology (IJETT), Vol. 4, Issue 8, India, 2013.
[22] A. H. Witten, Text Mining. [Online]. Available: http://www.cos.ufrj.br/~jano/LinkedDocuments/_papers/aula13/04-IHW Textmining.pdf
[23] R. Ferreira, L. de S. Cabral, R. D. Lins, G. P. e Silva, F. Freitas, G. D. C. Cavalcanti, R. Lima, S. J. Simske and L. Favaro, "Assessing Sentence Scoring Techniques for Extractive Text Summarization", Expert Systems with Applications, Elsevier, Netherlands, 2013.
[24] আহনাফ রাতুল (2016), হেয়েছ েদেশর সবচাiেত বড় আiিসিট iেভn. [Online]. Available: http://www.bigganprojukti.com/?p=76344
[25] Anondobazar (2016), বাংলােদেশ সবেচেয় বড় pযুিk েমলা 'িডিজটাল oয়াlর্-2016'. [Online]. Available: http://www.anandabazar.com/bangladesh-news/bangladesh-s-biggest-technology-fair-digital-world-2016-kick-off-bng-dgtl-1.498224
BACHELOR OF SCIENCE IN COMPUTER SCIENCE AND ENGINEERING

Bengali Text Summarization Using TextRank, Fuzzy C-means and Aggregated Scoring Techniques

AUTHORS: Alvee Rahman, Fahim Md Rafiq, Ramkrishna Saha, Ruhit Rafian
SUPERVISOR: Mr. Hossain Arif, Assistant Professor, Department of CSE

A thesis submitted to the Department of CSE in partial fulfillment of the requirements for the degree of B.Sc. Engineering in CSE. Department of Computer Science and Engineering, BRAC University, Dhaka - 1212, Bangladesh. December 2018.

We would like to dedicate this thesis to our loving parents.

Declaration
We hereby declare that this thesis report is based on our own work and research, and that this report has not been submitted anywhere for any other degree or professional qualification. The contents of this report have been prepared by us for our final undergraduate thesis, and any materials or work by other researchers have been acknowledged and referenced in the reference section.

Authors: Alvee Rahman (Student ID: 15101036), Fahim Md Rafiq (Student ID: 15101056), Ramkrishna Saha (Student ID: 15101024), Ruhit Rafian (Student ID: 14201028)
Supervisor: Hossain Arif, Assistant Professor, Department of Computer Science and Engineering, BRAC University, December 2018

The thesis titled "Bengali Text Summarization Using TextRank, Fuzzy C-means and Aggregated Scoring Techniques", submitted by Alvee Rahman (15101036), Fahim Md Rafiq (15101056), Ramkrishna Saha (15101024) and Ruhit Rafian (14201028) of Academic Year Fall 2018, has been found satisfactory and accepted as partial fulfillment of the requirement for the degree of B.Sc. in Computer Science and Engineering.

Md. Abdul Mottalib, Professor & Chairperson, Department of CSE, BRAC University
Hossain Arif, Assistant Professor, Supervisor, BRAC University

Acknowledgement
We take this opportunity to express our sincere gratitude to our supervisor Mr. Hossain Arif for his immense support during the period of our undergraduate thesis.
His valuable guidance and immense knowledge in this field helped and motivated us in writing this thesis. We would also like to thank all the faculty members of the Department of Computer Science and Engineering, BRAC University, for helping us with all the necessary support.

Abstract
In this world, it is very difficult and time-consuming for humans to summarize large documents, reports, news and research articles. Text summarization techniques play a vital role in picking out the important points and sentences, thus reducing the time and effort required to read a whole article. Numerous summarization techniques have been applied to the English language, but comparatively, work on Bengali text summarization is still limited. Furthermore, in our country, Bangladesh, summarization is mainly done by humans. Keeping that in mind, we aim to find a simple way of summarizing Bengali texts with the technology at hand. Text summarization can be of two types, either abstractive or extractive. In this paper we use extractive text summarization to summarize Bengali passages, using the Fuzzy C-Means, TextRank and Aggregate Sentence Scoring methodologies. We have also done a comparative study among the three methodologies used, aiming to find the most precise methodology for Bengali text summarization.

Table of Contents
List of Figures; List of Equations; List of Tables; Nomenclature

Chapter 1: Overview
1.1 Introduction
1.2 Thesis Orientation
1.3 Motivation
1.4 Objective
1.5 Challenges
Chapter 2: Literature Review
Chapter 3: Proposed Model
3.1 System Workflow
3.2 Dataset and Preprocessing
3.2.1 Dataset
3.2.2 Preprocessing
3.3 Feature Extraction
3.3.1 TF-IDF
3.3.2 Numerical Value
3.3.3 Sentence Length
3.3.4 Cue/Skeleton Word
3.3.5 Topic Sentence
3.3.6 Sentence Position based
3.4 Classifying Methodologies
3.4.1 Fuzzy C-Means Clustering
3.4.2 TextRank Algorithm
3.4.3 Aggregated Sentence Scoring
Chapter 4: Evaluation and Results
4.1 ROUGE
4.2 Comparative Study and Analysis
4.3 Test Articles
4.4 Output Summaries
Chapter 5: Conclusion
References

List of Figures
3.1: Workflow of the System
3.2: Number of centres and FPC values of FCM
3.3: FCM clustering output
4.1: Bar chart comparing the number of common sentences in the summaries
4.2: Bar chart for Test Article 1
4.3: Bar chart for Test Article 2

List of Equations
3.1: Formula for TF-IDF Scoring
3.2: Formula for Score Generation of TF-IDF
3.3: Formula for Normalizing TF-IDF score
3.4: Formula for Numerical Value based Scoring
3.5: Formula for Sentence Length based Scoring
3.6: Piecewise function for Sentence Length Scoring
3.7: Formula for Cue/Skeleton Word Scoring
3.8: Formula for Normalizing Cue/Skeleton Word Score
3.9: Formula for Topic Sentence Score
3.10: Formula for Normalizing Topic Sentence Word Score
3.11: Piecewise function for Sentence Position Scoring
3.12: Formula of Objective Function
3.13: Formula for Calculating Cluster Centers
3.14: Formula for Calculating Membership Values
3.15: Formula for Aggregate Sentence Scoring
4.1: Formula for Recall Metric
4.2: Formula for Precision Metric
4.3: Formula for F1 Measure

List of Tables
3.1: Stemmed Output
3.2: Representation of Stopword Removal
3.3: Score table for Test Article 1
3.4: PCA Score table for Test Article 1
3.5: Sentence allocation in Clusters
3.6: Sentences with their Aggregate Scores
4.1: Number of common sentences in the summaries generated
4.2: Comparison between F-number, Precision and Recall for Test Article 1
4.3: Comparison between F-number, Precision and Recall for Test Article 2
4.4: Percentage increase in Summary Accuracy

Nomenclature (Acronyms / Abbreviations)
FCM: Fuzzy C-Means
NLP: Natural Language Processing
NLTK: Natural Language Toolkit
PCA: Principal Component Analysis
PHP: Hypertext Preprocessor
ROUGE: Recall-Oriented Understudy for Gisting Evaluation
TF-IDF: Term Frequency - Inverse Document Frequency

Chapter 1: Overview

1.1 Introduction
We live in an era where everyone needs to be updated every second, yet no one has adequate time to read and stay informed. The world therefore needs real-time automatic text summarization to help people stay informed with the least time consumed. Text summarization can also be used to skim through large Bengali documents and then decide which ones to read if they seem interesting enough. With the advancement of technology in Bangladesh, the Bengali language is increasingly being used on almost all online platforms, hence the need for Bengali text summarization. In the proposed system, the objective is to take in articles written in Bengali and convert them into a shorter version while preserving the true meaning of the article. Text summarization is primarily divided into two major approaches: extractive and abstractive. In the extractive approach, the system simply omits the sentences that carry the least weight in the true meaning of the given text, generating a shorter and more precise version of the passage [11, 12]. In the abstractive approach, a summary of the original text is built keeping the same meaning and theme intact; the summary will be much like one written by a human [10].
The main key points of a text are identified, and then understandable sentences are constructed in a concise manner. This paper mainly discusses the extractive approach, which has been widely used over the years for summarization purposes. News articles, collected from the national daily "The Daily Prothom Alo", have been manually fed into the system; the system then processes the data before it can be summarized. In preprocessing, the system tokenizes the extract and removes the stopwords so that they have no influence on the summary generation. After the removal of stopwords, the system stems the words to their root forms, so that all words generated from a common root are considered as a single unit. The system primarily focuses on the Fuzzy C-Means clustering algorithm [19] to generate an optimal summary. Along with FCM, TextRank [24] and Aggregate Sentence Scoring [7, 12, 13, 14] have also been implemented to provide a comparative study at the end. For a uniform and accurate evaluation for the comparative study,
the system uses the ROUGE [32] scoring method and later calculates the F-measure to provide an understandable, illustrative comparative study.

1.2 Thesis Orientation
Chapter 2 contains the literature review, which discusses previous work done on the English and Bengali languages. It also contains information about the algorithms used in the system. Chapter 3 presents the proposed model, which consists of the system workflow, the dataset, preprocessing, sentence scoring and the classifying methodologies used. Chapter 4 shows the evaluation, results, comparative study and analysis. Chapter 5 finally concludes the paper and discusses the future scope of FCM in Bengali text summarization.

1.3 Motivation
Bangladesh is currently going through a digital revolution; every day there are newer innovations in our country, which in turn require a better support system. Like the rest of the world, Bangladesh has most of its services delivered digitally to its subscribers. Monthly subscriptions of printed newspapers have declined dramatically in recent times, and more people have become dependent on digital content. Such a change in lifestyle requires a system that will summarize all our documents in fractions of a second, so that we are always informed about everything we intend to know without spending much time of our fast-paced lives.

1.4 Objective
The objective of this paper is to propose a system which uses two new algorithms to generate Bengali text summaries. The two algorithms proposed are: 1) Fuzzy C-Means, and 2) TextRank. Since work done on Bengali text summarization is limited, we hope our research will shine a new light on the subject and open more doors for further research.

1.5 Challenges
Implementing text summarization for the Bengali language was not as straightforward as it is for the more global English language.
Numerous summarization projects have been carried out on English, and this has led to the availability of easily accessible packages and libraries which conduct the preprocessing of the test data in seconds. For Bengali, no such library could be found, hence the code had to be written from scratch to make the system a success. Another challenge the system had to overcome is that Bengali words generally do not appear in their root (dictionary) form; Bengali language syntax tends to alter words to match the context of the sentence, which is very different from English syntax. In English, the maximum extent to which a word is usually altered is the addition of a suffix or prefix, and this simple alteration can easily be handled with the lemmatization method available in the Natural Language Toolkit (NLTK) written in Python. For Bengali, the system had to be equipped with a separate stemming class, which converts every word into its root form.

Chapter 2: Literature Review

Automatic text summarization was
first introduced by Luhn [9] in 1958, where he proposed the idea of calculating word frequencies in sentences and later using those to score sentences, ultimately selecting the highest-ranked sentences for the summary. In recent years there have been numerous approaches to automatic text summarization. Some of the approaches include abstractive summarization techniques [10] such as the structure-based approach and the semantic-based approach. Among the extractive summarization techniques [11, 12], the cluster-based method, summarization with a neural network, the graph-based method, the latent semantic analysis (LSA) method, fuzzy-logic-based methods and query-based methods are some of the most effective and popular ones. Text summarization has been an important application of natural language processing to date, and although researchers have explored various methods of text summarization in the English language, very little has been done in other natural languages like Bengali. Although Bengali text summarization is not as widely popular as English text summarization, it has received ample attention as an emerging field of research in recent decades. The first work in the Bengali text summarization field was done by Islam et al. [1] in 2004. They proposed a keyword-search-based technique for multiple documents, where their corpus-based search engine searches for the keyword in multiple documents and then makes a summary of the relevant documents. This was followed by Uddin and Khan [2], who implemented a summarizer in Java using the location method, cue method, title and term frequency to rank the sentences. The first 40% of the higher-ranked sentences from a given text were given as the output. More methods such as TF*IDF, positional value and sentence length were used by Sarkar [3] for summarizing Bengali news documents. His idea was to generate the main gist of a news article in order to aid the reader with an idea of the whole article.
He used 30 Bengali documents and created a reference summary for each for evaluation purposes. Efat et al. [4] did similar research using word frequency, cue words, sentence positional value and the skeleton of the document for sentence scoring. Their work showed an 83.57% match with human-generated summaries, but their system's accuracy highly depended on the usage of keywords throughout the document. Furthermore, a more sophisticated approach was taken by Das and Bandyopadhyay [5], who built a topic-based opinion summarization system. Their system performs two tasks: 1) finding the theme of the document, and 2) finding the summary of the document. It does the first part by extracting the sentiment information in a document following a topic-sentiment model, which uses a clustering model such as K-means, and it uses a theme relational graph technique for finding the document-level summary. Their theme detection technique achieved 83.60% precision, while the summarization system achieved 72.15%. Another work was done by Sarkar [6], who only used the TF*IDF model along with positional value and sentence length to generate a summary of a single document.
He only used a single reference summary generated from the LEAD baseline for evaluation, which undermines the measured accuracy of the summarizer. Sarkar further continued his research [3] and used other systems such as System3 and Baseline System2 to generate reference summaries for comparison. A more rigorous study was done by Abjuar et al. [7], where they used the following for word analysis and scoring: frequency, numeric value identification, repeated word distance, and cue words. For sentence analysis and scoring, they used the summation of frequent words, sentence length, sentence position, uniform sentences, imitation sentences, the skeleton of a document, frequent word percentile, prime sentences, aggregate similarities, and final gist analysis. They tested their system with 3 different Bengali texts and compared them with a human-generated summary. Akter et al. in their paper [8] used a different approach in selecting sentences for generating a summary: they used K-means clustering after sentence ranking to choose the best and worst n sentences for the summary. This shone a different light on sentence selection, as the worst-scored sentences had not previously been used in generating summaries for Bengali text summarization. A newer approach was taken by Haque et al. [17] in the sense that they replaced pronouns by corresponding nouns. Furthermore, they always included the first sentence of the document in the generated summary. For summary generation, they used popular methods for sentence ranking, used the one-third top-ranked sentences in the final summary, and evaluated it using the F-measure. Out of the handful of brilliant, groundbreaking research that has been done on Bengali text summarization, none has used the concepts of Fuzzy C-Means. However, implementation of the FCM algorithm has been discussed for English language processing for quite some time now. In [18], Patil et al.
have proposed a text miner based on the Fuzzy C-Means algorithm. Document clustering is a very important part of text mining and has two flavours, namely hard clustering and soft clustering. In hard clustering, a data point belongs to only one cluster, whereas in soft clustering a data point may belong to multiple clusters. Each data point is associated with a membership function, which expresses the degree of its membership to a specific cluster. After the sentence clustering, it was seen that clustering using the Fuzzy C-Means algorithm outperforms the traditional K-means algorithm. Fuzzy C-Means (FCM) is a clustering algorithm based on fuzzy logic. There are different types of fuzzy clustering algorithms, such as fuzzy c-means and fuzzy k-nearest neighbour; however, fuzzy c-means is the most widely used and popular one. The FCM algorithm was developed by Dunn [19] in 1973 and was later modified by Bezdek [20] in 1981. The idea behind the Fuzzy C-Means algorithm has been used in various works related to natural language processing, including [21] and [22]. Another process of generating text summaries is the highly accepted TextRank algorithm. TextRank [24] is a graph-based unsupervised algorithm
derived from the PageRank algorithm [25]. PageRank was primarily introduced to rank the web pages which appear in online search results. In order to rank the web pages, the probability of a user visiting a page is considered and a score is calculated. With these probability values, a matrix is initialized and the values are updated iteratively, ultimately creating a set of ranked web pages. TextRank is very similar to PageRank, except that instead of web pages, sentences are used. The sentences are converted to vector representations and similarity scores are calculated. Using these scores, a graph is constructed and the top-ranked sentences are selected. In [26], Li et al. used the Wikipedia knowledge base to construct a modified TextRank model which extracts keywords from short texts. The main idea of the model was to treat each Wikipedia entry as an independent concept, so that the semantic information of a word could be demonstrated in terms of the distribution of the word over the Wikipedia concepts. Using the classic TextRank algorithm, the extracted keywords would only show the importance of the words within a single article, but in the aforementioned system, the importance of words is also affected by their presence in other Wikipedia articles. Their results show that their system performs better than the classical method and the common TF-IDF method. Before any summarization technique can be applied to any sort of text, proper preprocessing is required. Similar preprocessing techniques were discussed in [7, 8], where tokenization, stopword removal and stemming were conducted, making the text ready for mathematical analysis. Akter et al. [8] used the concept of TF*IDF for word scoring and incorporated cue/skeleton word concepts into their sentence scoring mechanism, which further improved their system's accuracy.
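The TextRank procedure described above (sentence nodes, similarity-weighted edges, damped iterative scoring inherited from PageRank) can be sketched as follows. This is an illustrative implementation, not the thesis code; the normalized word-overlap similarity of the original TextRank formulation and a damping factor of 0.85 are assumed here.

```python
import math
import re

def textrank(sentences, d=0.85, iters=50):
    """Illustrative TextRank over sentences: nodes are sentences,
    edge weights are log-normalized word overlap, and scores come
    from damped power iteration, as in the PageRank-derived original.
    Returns sentence indices, best first."""
    words = [set(re.findall(r"\w+", s.lower())) for s in sentences]
    n = len(sentences)

    def sim(i, j):
        # shared words, normalized by sentence lengths (log-scaled)
        overlap = len(words[i] & words[j])
        denom = math.log(len(words[i]) + 1) + math.log(len(words[j]) + 1)
        return overlap / denom if denom else 0.0

    w = [[sim(i, j) if i != j else 0.0 for j in range(n)] for i in range(n)]
    out_sum = [sum(row) or 1.0 for row in w]  # guard isolated nodes
    score = [1.0] * n
    for _ in range(iters):
        score = [(1 - d) + d * sum(w[j][i] * score[j] / out_sum[j]
                                   for j in range(n))
                 for i in range(n)]
    return sorted(range(n), key=lambda i: -score[i])
```

A summary is then the top-ranked indices re-emitted in original document order; sentences with no lexical overlap with the rest of the text sink to the bottom of the ranking.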
The works in [7, 12, 13, 14] brought in the idea of sentence scoring by taking sentence length into consideration; [7, 13, 14] took it one step further and also altered the sentence priority based on the sentence position within the text extract. Krishnaveni et al. [14] implemented the idea of topic scoring, where sentences containing words that are present in the topic sentence are given a higher priority. Having difficulty manipulating data with an unfavourable number of dimensions is not a new problem in this field of research. Nonetheless, it was quite remarkably handled by Tian et al. in their work [16], where they discussed how Principal Component Analysis (PCA) can be used to reduce the dimensionality of data, making complex data more amenable to manipulation and visualization. PCA works by taking in multidimensional data, analyzing its standardized form, and determining a predefined number of principal components for the given data based on the variation of the data points in any chosen dimension. In this paper, a comparative study and a thorough analysis are performed between the techniques TextRank, the Aggregate Scoring method, and FCM integrated with PCA, to summarize
Bengali text and news articles by the extractive method into concise and meaningful texts.

Chapter 3: Proposed Model

3.1 System Workflow
The system proposed in this paper uses three popular text summarization methodologies to summarize Bengali text documents and provides a comparative study of the generated outputs. Figure 3.1 represents a detailed experimental workflow of the system proposed in this paper. [Figure 3.1: Workflow of the system] At the very primary stage, a Bengali text article is fed into the system; once the file is read, it undergoes preprocessing to prepare the text document for scoring. During preprocessing, the system removes the stopwords present and splits the text into paragraphs, sentences and later into words (tokenization). The system also stems each word to its root version so that a word is not misinterpreted as different words when it occurs at multiple instances in different forms of its root version. After preprocessing, the system moves on to feature extraction, which is the scoring mechanism for the sentences used to generate the extractive summary of the input text. For the scoring mechanism, the system is equipped with 6 different scoring techniques: TF-IDF, Numerical Value, Sentence Length, Cue/Skeleton Word, Topic Sentence and Sentence Position scoring. Upon successful extraction of the features, the system produces a 6-dimensional array, upon which Principal Component Analysis (PCA) is performed to reduce the 6-dimensional data to 2-dimensional data. The 2-dimensional data is then subjected to Fuzzy C-Means (FCM) to classify the sentences into 2 clusters; the cluster having the greater F-measure value is printed as the output summary.
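The PCA-then-FCM stage of this workflow can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the thesis code: the 6-feature score matrix below is hypothetical, and the fuzziness exponent m = 2 with random membership initialization are assumptions (standard Bezdek defaults).

```python
import numpy as np

def pca(X, dims=2):
    """Project mean-centered feature vectors onto the top principal
    components, as the workflow does to reduce 6-D scores to 2-D."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dims].T

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal Fuzzy C-Means: returns the membership matrix U
    (n_points x c) and centers, via the standard Bezdek updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-9
        U = 1.0 / (dist ** (2 / (m - 1)))   # u_ik proportional to d^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Hypothetical 6-feature score matrix for five sentences
scores = np.array([[.90, .10, .80, .70, .90, .60],
                   [.80, .20, .70, .60, .80, .50],
                   [.10, .90, .20, .10, .20, .10],
                   [.20, .80, .10, .20, .10, .20],
                   [.85, .15, .75, .65, .85, .55]])
U, _ = fcm(pca(scores))
labels = U.argmax(axis=1)   # hard assignment of each sentence to a cluster
```

Hardening the fuzzy memberships with `argmax` recovers the two sentence groups; the thesis then keeps whichever cluster scores better against the Gold Summary.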
Apart from FCM, the system also uses the TextRank and Aggregate Scoring techniques to generate 2 more summaries for each article. TextRank is a form of summary generation derived from PageRank, where sentence similarity is used to find the most important sentences. Next, the system finds the aggregate scores of the sentences and creates a third summary using the most important sentences from the set. The F-measure is calculated for each of the summaries by comparing it with the Gold Summary (human-generated summary) that is manually fed into the system, and a comparative study is conducted exhibiting the classifying methodology with the maximum accuracy.

3.2.1 Dataset
The dataset has been taken from an online repository [31], which is a collection of texts with their human-generated summaries. Each instance of data has a full-sized text along with three human-generated summaries. The human-generated summary is called the "Gold Summary". The Gold Summary also follows the extractive approach to summarization, to keep consistency between it and the output summary. News articles from different national daily newspapers, including "The Daily Prothom Alo" and "Kaler Kantho", have also been considered as texts for summarization. In
Section 3.3, the test articles given are two news articles taken from the Daily Prothom Alo website.

3.2.2 Preprocessing
Bengali language processing has not been as widely popular as English language processing; hence Bengali does not have any libraries like NLTK that are readily available for English. To make the system perform in the way initially proposed, proper preprocessing had to be done. The preprocessing methodologies implemented in this system make the text readable by the machine and help the scoring mechanism perform precisely. Most of the code used in preprocessing has been written solely for this system, due to the unavailability of any pre-designed library. A detailed description of the system's preprocessing methodologies is given below:

1. Stemming
In Bengali, a certain root word can be altered in multiple ways to make it best suited to the sentence and the context it is used in. For example, the word 'কাজ' can be used as 'কাজজর', 'কাজটি' etc., but all of these words originate from the same root word, which is 'কাজ'. Hence, to make the system's scoring mechanism more accurate and relevant, a stemming mechanism is incorporated in preprocessing, which simply converts all words to their root versions. Taking the words above as an example, 'কাজজর', 'কাজটি' etc. will all be converted to 'কাজ', so that each time these words come up, the system's scoring mechanism will recognise and treat them as the same root word. A rule-based generic Bengali stemmer as implemented in [23] has been used, which converts a Bengali word into its stemmed form. Table 3.1 demonstrates what happens when words used in sentences are stemmed to their root versions.
Table 3.1: Stemming output

Original Word -> Stemmed Word
তেজের -> তেে
যুক্তরাজে -> যুক্তরাে
সাজের -> সাে
অজটাবজরর -> অজটাবর
তেজে -> োে
অজটাবজরই -> অজটাবর
কারজে -> কারে
োজসর -> োস

2. Stopword Removal

Words that contribute little to the meaning of a sentence and carry very little meaning themselves are called stopwords. Bengali sentences are often filled with numerous stopwords; the language and its grammar are designed such that stopwords must be used to make a sentence complete. Words such as 'অবশ্য', 'এই' and 'কজেক' are merely a few from the enormous list of stopwords that has been installed in the system. All such words are detected by the system and removed before scoring starts. If the stopwords were not removed, they would take up a lot of computational resources, and, as these words are likely to be repeated, they would be scored higher than the actually meaningful words and eventually contribute to generating inaccurate summaries. Table 3.2 below exhibits the sentence structure upon removal of the stopwords.

Table 3.2: Representation of stopword removal

Sentence with stopwords -> Sentence after stopwords are removed
অথচ গে অজটাবজরই তেজের দাে বযাজরেপ্রতে ৭৬ ডোজর উজেতিে। -> গে অজটাবজরই তেজের দাে বযাজরেপ্রতে ৭৬ ডোজর উজেতিে।
শুক্রবার এই তেজের দাে কজেজি ৫ দশ্তেক ৫ শ্োাংশ্। -> শুক্রবার তেজের দাে কজেজি ৫ দশ্তেক ৫ শ্োাংশ্।

3. Paragraph Splitting

An article contains multiple paragraphs, and the system separates each paragraph into an object that contains the necessary information about the paragraph it references. Each paragraph is then processed iteratively to rank its topic and concluding sentences.

4. Sentence Splitting

The position of a sentence within its paragraph is also one of the many attributes the system takes into account in order to generate the most accurate summary. A separate sentence class is written; an instance is created for each sentence, and the numerical value of each sentence feature is stored within it.

5. Tokenization

Tokenization is the process of splitting each sentence into separate words. Tokenization had to be done in order to check for occurrences of words in the sentences and to increment the scores for any positive matches.

3.3 Feature Extraction

3.3.1 TF-IDF Scoring

TF-IDF stands for Term Frequency-Inverse Document Frequency; this score represents the importance of a specific word in the entire document. TF is defined by the number of occurrences of the word w over the total number of sentences N:

TF-IDF(w) = Frequency(w) × log(N / Frequency(w))    (3.1)

Each sentence then receives a score based on the TF-IDF of each word it contains:

Score(s1) = Σ_{i=1..n} TF-IDF(w_i)    (3.2)

After scores for each of the sentences are generated, each score is normalized so as to make the system's scores compatible with the clustering algorithms. The normalization simply treats the maximum sentence score as 1, and all other scores are scaled relative to that maximum.
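As an illustration (not the system's actual code), the scoring of (3.1)-(3.3) could be implemented as follows; the function name is hypothetical, and sentences are assumed to be already tokenized, stemmed and stopword-free:

```python
import math

def tf_idf_sentence_scores(sentences):
    """sentences: list of token lists (already stemmed, stopwords removed).
    Implements eqs (3.1)-(3.3): TF-IDF(w) = Frequency(w) * log(N / Frequency(w)),
    a sentence's score is the sum over its words, then scores are max-normalized."""
    n = len(sentences)
    freq = {}
    for words in sentences:
        for w in words:
            freq[w] = freq.get(w, 0) + 1
    raw = [sum(freq[w] * math.log(n / freq[w]) for w in words)
           for words in sentences]
    top = max(raw) if raw and max(raw) > 0 else 1.0
    return [s / top for s in raw]
```

The returned scores lie in the range expected by the clustering step, with the highest-scoring sentence normalized to 1.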
Normalized Score(s1) = Score(s1) / Max(Scores(s1))    (3.3)

3.3.2 Numeric-Value-Based Sentence Scoring

A sentence containing numerical values is generally considered more important, as a number can add a lot of value to a summary. Thus, the text is scanned for the presence of the Bengali numerals ০, ১, ২, ৩, ৪, ৫, ৬, ৭, ৮ and ৯; if a numeral is found, regardless of combination, the numeral count of the sentence is incremented. The score is then calculated using:

Numerical Score(s2) = Numeral Count(s2) / Length(s2)    (3.4)

These scores do not require any normalization, as 0 ≤ Numerical Score(s2) < 1.

3.3.3 Sentence-Length-Based Scoring

Length-based scoring relies on the length of each sentence: a sentence is scored based on how its length compares to the average sentence length of the article. This method relies on the observation that sentences that are too long or too short do not hold much significance. Sentences that are too short usually contain anecdotes, exclamations or quotes that do not reflect importance. On the other hand, sentences that are too long are considered
vague, bringing unwanted information to the reader.

Score(s3) = Length(s3) / Average sentence length in the text    (3.5)

After scoring each sentence, the values are normalized to the algorithm's required range of 0-1 using a piecewise function of the length ratio:

Normalized Score(s3) = ratio, if ratio ≤ 1; 2 − ratio, if 1 < ratio ≤ 2; 0, if ratio > 2    (3.6)

3.3.4 Cue/Skeleton Word Scoring

Cue/Skeleton words are words that do not hold much significance by themselves but, when used in a sentence, provide a truer picture of the context. Sentences containing cue words such as 'কারে', 'তযজেেু' and 'অেএব' are likely to hold greater importance in capturing the gist of the text. The cue feature was discussed in [9, 14], where sentences containing words from a predefined list of cue words are given a higher importance. In this system, all the words in their tokenized form are cross-checked against the list of cue words, and if a positive match is found, the sentence containing the cue word is given a higher score:

for every cue word i present in sentence j: Score(s4) of sentence j = Score(s4) of sentence j + 1    (3.7)

Normalized Score(s4) = Score(s4) / Max(Scores(s4))    (3.8)

3.3.5 Topic Sentence Scoring

For any given text extract, the first sentence of the extract and the first sentence of each subsequent paragraph are expected to contain the words most relevant to the subject of the text and are more likely to give an overview of the context. A similar concept was discussed in [17]. In the system, the words present in the topic sentences are matched against the other sentences of any given paragraph, and the sentences containing words of the topic sentence are given a higher priority. This is done by flagging a word that is present in one of the topic sentences, then checking for the flagged word in the other sentences.
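The flag-and-match step just described can be sketched as follows (a hypothetical helper, not the system's actual code; the real system stores these values in its sentence objects):

```python
def topic_sentence_scores(paragraph):
    """paragraph: list of token lists; the first list is the topic sentence.
    Every sentence gets one point per token that also appears in the topic
    sentence (eq. 3.9); scores are then max-normalized (eq. 3.10)."""
    flagged = set(paragraph[0])
    raw = [sum(1 for w in words if w in flagged) for words in paragraph]
    top = max(raw) if max(raw) > 0 else 1
    return [s / top for s in raw]
```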
For example, if the word "আবোওো" exists in the topic sentence and also in sentence S, the score of S is incremented by 1.

Score(s5) = Sum(flagged words)    (3.9)

Normalized Score(s5) = Score(s5) / Max(Scores(s5))    (3.10)

3.3.6 Sentence-Position-Based Scoring

The first line of a paragraph, the topic sentence, usually highlights and sums up what the whole article is about; so does the concluding sentence of the paragraph. A scoring system is therefore used in which the paragraphs are iterated through and the topic and concluding sentences are ranked highest. The line immediately after the topic sentence and the line before the concluding sentence usually contain important information as well. Keeping that in mind, the first 10% and the last 10% of the sentences of each paragraph are scored higher than the rest. The following function shows the selection process, where S(i) is the positional score of sentence i:

Score(s6): S(i) = 1, if position of sentence i ≤ 0.1 × total no. of sentences in the paragraph; S(i) = 1, if position of sentence i ≥ 0.9 × total no. of sentences in the paragraph; S(i) = 0, otherwise    (3.11)

Table 3.3: Feature scores for Test Article 1

Sentence  TF-IDF   Numerical  Length  Cue/Skeleton  Topic   Position
1         0.173    0          0       0             0.2727  1
2         0.2228   0.125      0       0             0.0909  1
3         0.5143   0.5714     1       0.5           0.2727  1
4         0.2831   0.25       0       0             0.3636  1
5         0.697    0          1       0.5           0.3182  1
6         0.4604   0.1667     1       0             0.2727  0
7         0.1995   0          0       0             0.1364  1
8         0.6223   0          1       0             0.8182  1
9         1        0          0       0             1       1
10        0.597    0          1       0             0.1818  1
11        0.7138   0          1       0             0.2727  1
12        0.2002   0          0       0             0.4091  1
13        0.4094   0.2222     0       0             0.0909  1
14        0.4323   0.5455     1       0             0.1818  1
15        0.6274   0.2        1       0             0.0909  0
16        0.3514   0.1111     0       0             0.0455  1
17        0.3239   0.1111     0       0             0.0455  1
18        0.5531   0          1       0             0.5909  1
19        0.7598   0          1       0             0.2727  1
20        0.3419   0          0       0             0.1364  1
21        0.4277   0          0       0.5           0.4091  1
22        0.2534   0          0       0             0       1
23        0.2298   0          0       0             0       0
24        0.1088   0          0       0             0.0455  0
25        0.4223   0          1       0.5           0.1818  1
26        0.7632   0          1       1             0.8636  1
27        0.276    0          0       0.5           0.1364  1
28        0.574    0.1176     1       0             0.0909  0
29        0.2095   0          0       0             0.1364  1
30        0.2175   0          0       0             0.3636  1
31        0.2601   0.1111     0       0             0.0455  1
32        0.1796   0          0       0             0       0
33        0.1929   0          0       0             0.0909  0
34        0.7351   0.375      1       0             0.2273  1
35        0.6635   0          1       0             0.5909  1

3.4 Classifying Methodologies

3.4.1 Fuzzy C-Means Clustering

Principal Component Analysis (PCA)

The system developed for text summarization uses 6 features, i.e. the generated data is 6-dimensional, while the Fuzzy C-Means implementation this system uses works on 2-dimensional data; hence the need for Principal Component Analysis. One of the basic functions of PCA is that it reduces data dimensions, which also makes the data feasible to visualize [15, 16]. The algorithm takes in multidimensional data, standardizes it, and finally deduces the principal components based on the variation of the data.
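The reduction to two components can be sketched with a small NumPy routine (an illustration only; the system may equally rely on a library implementation such as scikit-learn's PCA):

```python
import numpy as np

def pca_2d(features):
    """features: (n_sentences, 6) feature matrix. Standardize each column,
    then project onto the two eigenvectors of the covariance matrix with
    the largest eigenvalues (PC1, PC2)."""
    X = np.asarray(features, dtype=float)
    std = X.std(axis=0)
    std[std == 0] = 1.0                                   # guard constant columns
    X = (X - X.mean(axis=0)) / std
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))  # ascending eigenvalues
    top2 = vecs[:, np.argsort(vals)[::-1][:2]]            # PC1, PC2 directions
    return X @ top2                                       # shape (n_sentences, 2)
```

By construction, the first output column (PC1) carries at least as much variance as the second (PC2).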
For the system developed, 6 features/columns were converted into 2 features/columns of data; Principal Component 1 (PC1) and Principal Component 2 (PC2). PC1 demonstrates the direction when there is the most variation in the input dataset, and PC2 demonstrates the same where the second most variation occurs. Upon generation of the 2 Principal Components, Fuzzy C-means algorithm is implemented on the 2-dimensional data, to generate the clusters based on which the summary is determined. Table 3.4 below shows the PC1 and PC2 values generated for each sentence for</s>
Test Article 1.

Table 3.4: PCA score table for Test Article 1

Sentence  PC1        PC2
1         -1.12037   -1.04909
2         -1.27738   -0.20211
3         1.830519   2.160945
4         -0.48953   0.00867
5         2.031563   -0.35554
6         0.052385   1.816605
7         -1.32648   -0.84062
8         2.128207   -0.64812
9         2.502514   -1.58377
10        0.777594   0.236368
11        1.265407   0.17756
12        -0.77416   -1.22567
13        -0.7288    0.344891
14        0.698672   2.583186
15        0.141872   2.322483
16        -1.04289   -0.12352
17        -1.11454   -0.13992
18        1.489062   -0.36811
19        1.385257   0.205004
20        -0.95546   -0.75566
21        0.536656   -1.54871
22        -1.46139   -0.61565
23        -2.1827    0.413302
24        -2.40611   0.276796
25        1.040501   -0.32662
26        4.023113   -1.54576
27        -0.40909   -1.25374
28        -0.05016   1.921285
29        -1.30042   -0.83465
30        -0.82093   -1.15104
31        -1.28076   -0.17799
32        -2.3135    0.383352
33        -2.09534   0.262795
34        1.469992   1.935296
35        1.776702   -0.30224

Fuzzy C-Means Clustering

The term fuzzy set in mathematics refers to a set in which each element has a varying degree of membership. In traditional set theory, the membership of an element in a set is expressed in a binary fashion: an element either belongs to the set or it does not. In fuzzy set theory, however, membership is expressed through a membership function, with membership values varying in the interval [0, 1]. The Fuzzy C-Means algorithm is a soft computing technique initially developed by Dunn [19] and is based on the fuzzy set theory described above. The idea of membership is adapted in Fuzzy C-Means into a membership matrix, known as the partition matrix, which contains the degrees of membership of the elements across the different clusters. In the proposed system, the number of clusters is set to 2. The Fuzzy C-Means algorithm runs very similarly to the K-means algorithm. First, the required number of clusters has to be specified.
Next, an initial partition matrix is created and data points are randomly distributed over the clusters in a binary way. The algorithm converges when the change in the membership values between two iterations falls below ε, the specified error limit, or when the maximum number of iterations has been reached. The primary aim of FCM is to minimize the objective function (3.12) before the algorithm converges:

J = Σ_{i=1..n} Σ_{j=1..c} μ_ij^m ‖x_i − c_j‖²    (3.12)

where c is the total number of clusters, n is the total number of data points, μ_ij stands for the degree of membership of x_i in the j-th cluster, and m is any real number greater than 1. The FCM algorithm aims to partition a set of n data points X = {x1, x2, x3, ..., xn} into the specified clusters. Fuzzy partitioning is done by iterative optimization of the objective function (3.12), with the update of the membership values as in (3.14) and the calculation of the cluster centroids as in (3.13) [28]:

c_j = Σ_{i=1..n} (μ_ij^m x_i) / Σ_{i=1..n} (μ_ij^m)    (3.13)

where
c_j is the d-dimensional centre of the j-th cluster, and

μ_ij = 1 / Σ_{k=1..c} ( ‖x_i − c_j‖ / ‖x_i − c_k‖ )^(2/(m−1))    (3.14)

The FCM algorithm [27] has the following steps:

1. Initialize the partition matrix randomly: U(0) = [μ_ij].
2. At step k, calculate the centre vectors C(k) = [c_j] from U(k) using equation (3.13).
3. Update U(k) to U(k+1) using equation (3.14).
4. If ‖U(k+1) − U(k)‖ < ε, then STOP; otherwise return to step 2.

This system uses the FCM implementation in [29]. FCM is the clustering model used in this system; it was fed with the 2-dimensional data generated by the PCA model. The FCM model automatically calculated the optimum number of centres based on the given input data and then iteratively found out which centre each data point is closest to, which ultimately led to creating the clusters. The optimum number of centres is found using the Fuzzy Partition Coefficient (FPC): the greater the value, the better the corresponding number of centres fits the data, as can be seen from the following graph, which was generated from Test Article 1. An illustration of the centres with their FPC values is given below in Figure 3.2.

Figure 3.2: Number of centres and FPC values of FCM

Figure 3.2 shows the system's experiments with the number of centres ranging from 2 to 10. It can be seen that with 2 centres the greatest FPC value was generated, which means the best results are obtained when 2 centres are initiated. The main objective of using FCM here is to find out which sentences should be chosen for the system's summary based on their processed scores.

Figure 3.3: FCM clustering output

Table 3.5 below lists the sentences that have been allocated to the two clusters generated by Fuzzy C-Means.
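A minimal sketch of the update loop of (3.13)-(3.14) is given below (illustrative only, not the implementation from [29]; the function name and default parameters are assumptions):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Start from a random partition matrix U, then alternate the centre
    update (3.13) and the membership update (3.14) until U changes by
    less than eps or max_iter is reached."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]             # eq. (3.13)
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        inv = np.fmax(d, 1e-12) ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)               # eq. (3.14)
        if np.linalg.norm(U_new - U) < eps:                        # stop rule
            return centres, U_new
        U = U_new
    return centres, U
```

On well-separated data, the hard cluster labels can be read off each point's largest membership value.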
Table 3.5: Sentence allocation in clusters

Sentences in Cluster 1: 2, 4, 5, 7, 8, 9, 10, 13, 14, 17, 18, 20, 24, 25, 27, 33, 34
Sentences in Cluster 2: 0, 1, 3, 6, 11, 12, 15, 16, 19, 21, 22, 23, 26, 28, 29, 30, 31

3.4.2 TextRank Algorithm

TextRank has been a popular summary generator for the English language; hence it is used here to generate another summary from the same articles for comparison purposes. The TextRank code is taken from a repository on GitHub [30], in which the TextRank algorithm is implemented using PHP. The article was first stemmed and all stopwords were removed in order to attain a greater accuracy from the algorithm. The generated summary was output to a text file, which was later used for comparison with the other two methods.

3.4.3 Aggregated Sentence Scoring

Aggregated Sentence Scoring is a very simple and straightforward approach to generating a summary from the features obtained by analyzing a given text extract. It
is the traditional method of summarization, in which all the feature scores of every sentence are added together and the sentences are ranked by their cumulative score. A summary-length ratio relative to the original text is usually predefined, and the extractive summary is computed by selecting the sentences that hold the highest scores. For a clearer understanding, assume the system is fed an original text of 20 sentences and the predefined ratio is 0.4, or 40%. After the aggregate scores are computed and the sentences are ranked in descending order, the top 40% is taken as the summary of the original text; in this particular case, the summary will have a length of 8 sentences.

Aggregate Score(s_j) = Σ_i Score_i(s_j), summed over every feature i of sentence j    (3.15)

Table 3.6: Sentences with their aggregate scores for Test Article 1

এফএক্সটিএজের তবজেষক েুকোে ওেুেুগা বজেে, ‘তেজের সরবরাে একতদজক বাড়জি, অেযতদজক চাতেদা কেজি—এই দুই কারজে তেজের বাজাজর তবপয যে তেজে আসজি — 4.627
২০১৭ সাজের অজটাবজরর পর এই প্রথে তেজের দাে বযাজরেপ্রতে ৫০ দশ্তেক ৪২ ডোজর তেজে এে — 3.858
তকন্তু অতে সরবরাে তেজে শ্ঙ্কা, চাতেদা পজড় যাওো—এসব কারজে এক োজসর েজযয তেজের দাে এেিা কজে তগজি — 3.515
তসাতসজেি তজোজরজের পেয গজবষো তবভাজগর প্রযাে োইজকে তেইগ বজেে, িে সপ্তাে যজর দাে তয োজর কেজি, োজে তবতেজোগকারীজদর োতভশ্বাস উজে যাওোর তজাগাড় — 3.44
যুক্তরাজে শুক্রবার এক গযােে জ্বাোতের দাে তিে ২ দশ্তেক ৫৮ ডোর, যা এক োস আজগও তিে ২.৮৪ ডোর — 3.337
এই পতরতিতেজে তেে উৎপাদেকারী তদশ্গুজো আগােী োজস তভজেোে ওজপক ও সেজযাগী তদশ্গুজোর ববেজকর তদজক োতকজে আজি — 3.254
শুক্রবার ২০১৮ সাজের েজযয তেজের দাে সব যতেম্ন ৫৯ ডোজর তেজে আজস — 3.16
ইরাজের ওপর তেজষযাজ্ঞা আসজি এই আশ্ঙ্কাে তসৌতদ আরবসে ওজপকভুক্ত তদশ্গুজো তেজের উৎপাদে বাতড়জে তদে — 3.144
এসব কারজে ববতশ্বক অথ যেীতের চাতেকা শ্ক্তক্ত জ্বাোতে তেজের বাজার রেরো েওোর সম্ভাবো তেই — 3.104
তকন্তু যুক্তরাে এরপর ভারে, চীেসে তবশ্ কজেকটি তদশ্জক ইরাে তথজক তেে তকোর তবোে িাড় তদজে বাজাজর তেজের সরবরাে অজেকিা তবজড় যাে — 3.033
তেে খােসাংতেষ্ট বযক্তক্তরা আশ্া করজিে, আগােী োজস তভজেোে ওজপক ও সেজযাগী তদশ্গুজোর ববেজক তসৌতদ আরবসে অেযরা তেজের সরবরাে যজথষ্ট োজর কোজব এবাং োজে বাজার তকিুিা সাশ্রেী েজব আর সম্প্রতে তেতে তযভাজব তসৌতদ আরজবর প্রশ্াংসা করজেে, োজে তবতেজোগকারীজদর েজে শ্ঙ্কা, তসৌতদ আরব সম্ভবে উৎপাদে তেেে একিা কোজব ো — 2.987
েজব দাে কজে যাওো সজেও োতকযে তপ্রতসজডে তডাোল্ড ট্রাম্প ওজপকসে তসৌতদ আরবজক উৎপাদে ো কোজে চাপ তদজেে — 2.779
অেযতদজক ববতশ্বক আতথ যক বাজাজর প্রবৃক্তি তেজে আবারও আশ্ঙ্কা বেতর েজেজি — 2.337
তেে তকাম্পাতের তশ্োজরর দাে পজড় যাওোে শুক্রবার ডাও সূচজকর োে ১৭৮ পজেে কজে যাে — 1.918
তেজের দাে এভাজব কোর কারজে অজেজকই েেবুি েজে তগজিে — 1.912
এক োস আজগই পয যজবক্ষজকরা তদে গুেতিজেে, তেজের দাে কজব বযাজরেপ্রতে ১০০ ডোজর উেজব — 1.9
অথচ গে অজটাবজরই তেজের দাে বযাজরেপ্রতে ৭৬ ডোজর উজেতিে — 1.897
তসাতসজেি তজোজরজের তেসাব েজে, চেতে প্রাতিজক বড় বড় েেতবজের ক্ষতের পতরোে ৭৭০ তকাটি ডোর িাতড়জে তগজি — 1.783

Chapter 4
Evaluation and Results

4.1 ROUGE

Recall-Oriented Understudy for Gisting Evaluation (ROUGE) [32] is a set of metrics used to compare machine-generated summaries or translations against reference summaries, a.k.a. the Gold Summary. ROUGE produces a metric value that determines the accuracy of the generated summary from the ratio of overlapping sentences. For the evaluation of the summaries generated by the system's 3 methods, the ROUGE-2 measure was used: the summary generated by the system is compared with the reference (human-produced) summary. It has two criteria for evaluation: 1) Recall and 2) Precision. Recall finds out how many sentences of the reference summary also appear in the system summary. It uses the following formula:

Recall = Number of overlapping sentences / Total number of sentences in the reference summary    (4.1)

A perfect score of 1 would mean the system summary matched the reference summary fully. However, the system summary might contain useless and unnecessary information in addition to the information present in the reference summary and still receive a good recall score. A better way to check whether only the relevant information is present in the system summary is the precision measure, which finds out how much of the system summary is actually present in the reference summary, using the following formula:

Precision = Number of overlapping sentences / Total number of sentences in the system summary    (4.2)

It simply indicates whether the system summary is indeed relevant and concise. Lastly, the F1 measure, a measure of a test's accuracy, is calculated using both the recall and precision values. A score of 0 means the test yielded the worst result, while 1 stands for the best: a score of 1 means the system summary matched the gold summary exactly, while 0 means the system summary is totally inaccurate.
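The sentence-overlap recall and precision of (4.1)-(4.2), together with F1, can be sketched as follows (a simplified illustration with a hypothetical function name; note that standard ROUGE-2 counts bigram overlaps, whereas this chapter works at the sentence level):

```python
def rouge_sentence_scores(system_summary, reference_summary):
    """Sentence-level Recall (4.1), Precision (4.2) and F1 (4.3), based on
    the number of sentences the two summaries have in common."""
    overlap = len(set(system_summary) & set(reference_summary))
    recall = overlap / len(reference_summary)
    precision = overlap / len(system_summary)
    f1 = 0.0 if overlap == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```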
The F1 measure is calculated by the following formula:

F1 = 2 × (precision × recall) / (precision + recall)    (4.3)

4.2 Comparative Study and Analysis

As represented by Table 4.1 and visualized by Figure 4.1 below, FCM yields a higher number of common sentences on both test cases. This means that the FCM algorithm returns sentences that have a higher probability of carrying more importance from the input article.

Table 4.1: Number of common sentences in the generated summaries

           TextRank  Aggregate Scoring  FCM
Article 1  10        10                 12
Article 2  7         9                  10

Figure 4.1: Bar chart comparing the number of common sentences in the summaries

Table 4.2: Comparison of F1, Precision and Recall for Test Article 1

            TextRank      Aggregate Scoring  FCM
F1 measure  0.625         0.5882352941       0.6857142857
Precision   0.7142857143  0.625              0.7058823529
Recall      0.5555555556  0.5555555556       0.6666666667

Figure 4.2: Bar chart for Test Article 1

When judging the accuracy of the summaries, we can look at two factors: the F1 measure and the number of common sentences. For the first article, we notice a higher F1 measure
for the FCM summary than for both Aggregate Scoring and TextRank. This is backed up by the fact that the FCM summary shares more common sentences with the Gold Summary than either TextRank or Aggregate Scoring: FCM generated 2 more relevant sentences, which results in a higher F1 measure.

Table 4.3: Comparison of F1, Precision and Recall for Test Article 2

            TextRank     Aggregate Scoring  FCM
F1 measure  0.35         0.5                0.606060606
Precision   0.304347826  0.473684210       0.625
Recall      0.411764705  0.529411764       0.588235294

Figure 4.3: Bar chart for Test Article 2

In the case of the second article, FCM consistently returns a higher F1 measure, meaning that the summary generated by the FCM algorithm is more accurate and better retains information from the initial article. This is backed up by the number of sentences common between the FCM summary and the Gold Summary being higher in both cases.

Table 4.4: Percentage increase in summary accuracy

           Increase over TextRank  Increase over Aggregate Scoring
Article 1  9.264                   15.303
Article 2  53.565                  19.178

4.3 Test Articles

Article 1: ‘৫০ ডলারে নেরে এল নেরলে দাে’

অপতরজশ্াতযে তেজের দাে আরও এক দফা কেে। গেকাে যুক্তরাজে তেজের দাে ৭ শ্োাংশ্ কজেজি। ২০১৭ সাজের অজটাবজরর পর এই প্রথে তেজের দাে বযাজরেপ্রতে ৫০ দশ্তেক ৪২ ডোজর তেজে এে। অথচ গে অজটাবজরই তেজের দাে বযাজরেপ্রতে ৭৬ ডোজর উজেতিে। তকন্তু অতে সরবরাে তেজে শ্ঙ্কা, চাতেদা পজড় যাওো—এসব কারজে এক োজসর েজযয তেজের দাে এেিা কজে তগজি। এক োস আজগই পয যজবক্ষজকরা তদে গুেতিজেে, তেজের দাে কজব বযাজরেপ্রতে ১০০ ডোজর উেজব। এখে তেজের এই পড়তে দাে তদজখ োাঁজদর কপাজে তচিার ভাাঁজ পজড়জি। তসাতসজেি তজোজরজের পেয গজবষো তবভাজগর প্রযাে োইজকে তেইগ বজেে, িে সপ্তাে যজর দাে তয োজর কেজি, োজে তবতেজোগকারীজদর োতভশ্বাস উজে যাওোর তজাগাড়। তেে খােসাংতেষ্ট বযক্তক্তরা আশ্া করজিে, আগােী োজস তভজেোে ওজপক ও সেজযাগী তদশ্গুজোর ববেজক তসৌতদ আরবসে অেযরা তেজের সরবরাে যজথষ্ট োজর কোজব এবাং োজে বাজার তকিুিা সাশ্রেী েজব। েজব দাে কজে যাওো সজেও োতকযে তপ্রতসজডে তডাোল্ড ট্রাম্প ওজপকসে তসৌতদ আরবজক উৎপাদে ো কোজে চাপ তদজেে। আর সম্প্রতে তেতে তযভাজব তসৌতদ আরজবর প্রশ্াংসা করজেে, োজে তবতেজোগকারীজদর েজে শ্ঙ্কা, তসৌতদ আরব সম্ভবে উৎপাদে তেেে একিা কোজব ো। তেজের দাজের ববতশ্বক োেদণ্ড েজে অপতরজশ্াতযে তেে তেজের দাে। শুক্রবার এই তেজের দাে কজেজি ৫ দশ্তেক ৫ শ্োাংশ্। শুক্রবার ২০১৮ সাজের েজযয তেজের দাে সব যতেম্ন ৫৯ ডোজর তেজে আজস। তেে তকাম্পাতের তশ্োজরর দাে পজড় যাওোে শুক্রবার ডাও সূচজকর োে ১৭৮ পজেে কজে যাে। তশ্ভরে ও কজোজকাতফতেপজসর তশ্োজরর দাে ৩ শ্োাংশ্ পজড় যাে। আর তশ্ে উৎপাদক ইওক্তজ তরজসাজস যর দাে পজড়জি ৫ শ্োাংশ্। ইরাজের ওপর তেজষযাজ্ঞা আসজি এই আশ্ঙ্কাে তসৌতদ আরবসে ওজপকভুক্ত তদশ্গুজো তেজের উৎপাদে বাতড়জে তদে। তকন্তু যুক্তরাে এরপর ভারে, চীেসে তবশ্ কজেকটি তদশ্জক ইরাে তথজক তেে তকোর তবোে িাড় তদজে বাজাজর তেজের সরবরাে অজেকিা তবজড় যাে। এজে বাজাজর তেজের দাে ক্রজেই কেজে কেজে এ জােগাে এজস দা াঁতড়জেজি। অেযতদজক ববতশ্বক আতথ যক বাজাজর প্রবৃক্তি তেজে আবারও আশ্ঙ্কা বেতর েজেজি। অথ যেীতেতবজদরা ইতেেজযয প্রবৃক্তির প্রাক্কেে কে কজর যরজিে। তবজশ্বর েৃেীে ও চেুথ য অথ যেীতে ইতেেজযয সাংকুতচে েজে। চীজের প্রবৃক্তিও কেজি। এসব কারজে ববতশ্বক অথ যেীতের চাতেকা
শ্ক্তক্ত জ্বাোতে তেজের বাজার রেরো েওোর সম্ভাবো তেই। এফএক্সটিএজের তবজেষক েুকোে ওেুেুগা বজেে, ‘তেজের সরবরাে একতদজক বাড়জি, অেযতদজক চাতেদা কেজি—এই দুই কারজে তেজের বাজাজর তবপয যে তেজে আসজি।’ তেজের দাে এভাজব কোর কারজে অজেজকই েেবুি েজে তগজিে। তসাতসজেি তজোজরজের তেসাব েজে, চেতে প্রাতিজক বড় বড় েেতবজের ক্ষতের পতরোে ৭৭০ তকাটি ডোর িাতড়জে তগজি। পজেযর বাজাজরর পতরতিতেও েোশ্াজেক। েজব তেজের পড়তে দাে তভাগযপেয তক্রোজদর জেয আশ্ীব যাদ েজে এজসজি। সােজে বড়তদে, োর আজগ ২২ েজভম্বর থযাাংকসতগতভাং তডও পাতেে েজেজি। এ উপেজক্ষ োেুষ তবড়াজে যাে। তেজের দাে কে থাকাে োেুজষর চোজফরা তবজড়জি। যুক্তরাজে শুক্রবার এক গযােে জ্বাোতের দাে তিে ২ দশ্তেক ৫৮ ডোর, যা এক োস আজগও তিে ২.৮৪ ডোর। এই পতরতিতেজে তেে উৎপাদেকারী তদশ্গুজো আগােী োজস তভজেোে ওজপক ও সেজযাগী তদশ্গুজোর ববেজকর তদজক োতকজে আজি।

Article 2: ‘শিক্ষাশিপ্লরি েশেয়া চীে’

তবজশ্বর অেযেে অথ যনেতেক পরাক্রেশ্ােী রাে চীে। সােতরক শ্ক্তক্ত ও প্রতেরক্ষা, েেুে েেুে েথয ও তযাগাজযাগ প্রযুক্তক্তর উপকরে উদ্ভাবে এবাং উন্নেজে তদশ্টি যজথষ্ট এতগজে। তসই েুেোে তশ্ক্ষাজক্ষজে তযে এতগজে তযজে পাজরতে। যুক্তরাজজযর সাপ্তাতেক প্রকাশ্ো িাইেস োোর এডুজকশ্ে সােতেকীর সব যজশ্ষ জতরজপ তবজশ্বর শ্ীষ য ১০টি তবশ্বতবদযােজের েজযয চীজের তকাজো তবশ্বতবদযােজের োে তেই। তবষেটি োজদর ভাতবজে েুজেজি। অবশ্য সাাংোই র ্যাক্তঙ্কাংজে শ্ীষ য ৫০০ তবশ্বতবদযােজের েজযয চীজেরই আজি ৪৫টি তবশ্বতবদযােে। এজে অবশ্য োরা সন্তুষ্ট েে। তশ্ক্ষাজক্ষজে তবপ্লব ঘটিজে শ্ীষ যিাে দখে করজে চীে এখে েতরো। আর এ জেয তদশ্টির োেকরা তসাংেুো তবশ্বতবদযােে ও তপতকাং তবশ্বতবদযােজের ওপর আরও তবতশ্ তজার তদজেজি চীো কেৃযপক্ষ। ১৯১১ সাজে তবইক্তজাংজে প্রতেটিে েে তসাংেুো তবশ্বতবদযােে। শ্োতযক বিজরর পুজরাজো এই তবশ্বতবদযােে এখে গজবষো, তবজ্ঞাে, প্রযুক্তক্ত, প্রজকৌশ্ে ও গতেে তবষজে চীোজদর গজব যর প্রেীক। পক্তিো গজবষোতভতিক তবশ্বতবদযােেগুজোর আদজে পতরচাতেে েজে চীজের তসাংেুো তবশ্বতবদযােে ও তপতকাং তবশ্বতবদযােে। এই দুটি তবশ্বতবদযােে পরস্পর প্রতেজবশ্ী ও প্রতেজযাগী, যা তকো চীজের অক্সজফাডয ও তকেতেজ তেজসজব খযাে। তসাংেুো েজে প্রচতেে ও বাস্তবযেী তবশ্বতবদযােে। এই তবশ্বতবদযােজে পড়াজশ্াো কজরজিে চীজের বেযোে তপ্রতসজডে তস তচে তপাং, সাজবক তপ্রতসজডে েু ক্তজেোওসে তবখযাে অজেজকই। আর তপতকাং তবশ্বতবদযােে েজে তদশ্টির কতব, দাশ্ যতেক ও তবপ্লবীজদর েীথ যিাে। চীজের সাজবক শ্ীষ য তেো োও তসেুাং এই তবশ্বতবদযােে গজবষো কজরতিজেে। ১৯৮৯ সাজে তেজেেআেজেে স্কোজর তবজক্ষাজভ তবশ্বতবদযােেটি অগ্রেী ভূতেকা রাজখ। েন্ডেতভতিক সােতেকী দয ইজকােতেজের প্রতেজবদে বেজি, ১৯৯৫ সাে তথজক চীো তকন্দ্রীে সরকার তদশ্টির তবশ্বতবদযােেগুজো তবজশ্বর শ্ীষ য তবশ্বতবদযােজে উন্নীে করজে োখ োখ ডোর বযে অবযােে তরজখজি। এর আওোে প্রথে ২১১টি প্রকল্প োজে তেওো েজেজি। একতবাংশ্ শ্োব্দীর চযাজেঞ্জ তোকাতবোে প্রাে ১০০টি প্রতেিােজক প্রস্তুে করা েজেজি। ২০১৫ সাে তথজক চােু করা েজেজি ডবে ফােয ক্লাস প্লযাে প্রকল্প। এর েক্ষয দ্রেুেে সেজে তবশ্বতবদযােেগুজো তথজক একটি তবশ্বতবদযােেজক তবশ্বোজে পতরেে করা ও প্রতেিাজের পতরসর বাড়াজো। তযজকাজো তকিুর তপিজে অথ য েজে েূে চাতেকা শ্ক্তক্ত। তসই অথ য খরচ করজে প্রস্তুে চীে। অথ যােে প্রক্তক্রো তবশ্বতবদযােেগুজোজক উৎকৃষ্ট োজের গজবষোে অেুপ্রাতেে কজর। চীো তবশ্বতবদযােজের একাজডতেক গজবষোে তেজোক্তজে বযক্তক্তজদর যজথষ্ট প্রজোদোরও বযবিা রজেজি। প্রযুক্তক্ত ও প্রতেজযাতগোতেভযর তবজশ্বর তবতভন্ন তদজশ্র সরকারও েীতেতেয যারজে পতরবেযে তেজে আসজি। তবশ্বোজের তবশ্বতবদযােে প্রতেিা, উন্নে গজবষো, র ্যাক্তঙ্কাংজে অিভুযক্তক্ত ও অগ্রগতের জেয তশ্ক্ষা খাজে চীে িাড়াও ভারে, তসঙ্গাপুর, দতক্ষে তকাতরো, োইওোে, ফ্রান্স, জাে যাতে তবপুে অথ য বযে করজি। প্রতেজবশ্ী তদশ্ ভারে োজদর ২০টি তবশ্বতবদযােেজক তবশ্বোজে তেওোর তঘাষো তদজেজি। এেেতক
োইজজতরোর েজো তদশ্ ২০২০ সাজের েজযয োজদর অিে দুটি তবশ্বতবদযােেজক তবজশ্বর শ্ীষ য ২০০টির েজযয অিভুযক্তক্তর েক্ষযোো তেয যারে কজরজি। তসাংেুোর তেযাবী তশ্ক্ষাথীরা তসরা গজবষক েজে তদজশ্র উন্নেজে কাজ কজরে তকাংবা তদজশ্র েজে তবজদজশ্ গজবষোে তেযুক্ত েে। ২০১৭ সাজে তসাংেুো তবশ্বতবদযােে ১ োজার ৩৮৫ জেজক ডটজরি উপাতয তদজেজি। এজকই সেে যুক্তরাজের েযাসাচুজসিস ইেতেটিউি অব তিকজোেক্তজজে (এেআইটি) ৬৫৪ জেজক ডটজরি তদওো েে। অবশ্য এই সাংখযা তসাংেুো তবশ্বতবদযােজের সাফজেযর প্রযাে কারে েে। তসাংেুো তবশ্বতবদযাজের ভাইস তচোরেযাে ইোাং তবে বজেে, ‘তসাংেুোর সবজচজে গুরুত্বপূে য উন্নেে তিে ১৯৭৮ সাজে, যখে তডাং ক্তজোওতপাং (প্রোে রাজেীতেক) বজেে, চীে তবপুেসাংখযক তশ্ক্ষাথী তবজদজশ্ পাোজব।’ তেতে আরও বজেে, ‘১০ োজার তশ্ক্ষাথী তবজদজশ্ পাোজো প্রজোজে। আোজদর ববজ্ঞাতেক তশ্ক্ষার স্তর উন্নে করার এিাই েজে অেযেে প্রযাে পথ।

4.4 Output Summaries

Gold Summary for Test Article 1

অপতরজশ্াতযে তেজের দাে আরও এক দফা কেে। ২০১৭ সাজের অজটাবজরর পর এই প্রথে তেজের দাে বযাজরেপ্রতে ৫০ দশ্তেক ৪২ ডোজর তেজে এে। অথচ গে অজটাবজরই তেজের দাে বযাজরেপ্রতে ৭৬ ডোজর উজেতিে। তকন্তু অতে সরবরাে তেজে শ্ঙ্কা, চাতেদা পজড় যাওো—এসব কারজে এক োজসর েজযয তেজের দাে এেিা কজে তগজি। এক োস আজগই পয যজবক্ষজকরা তদে গুেতিজেে, তেজের দাে কজব বযাজরেপ্রতে ১০০ ডোজর উেজব। তেজের দাজের ববতশ্বক োেদণ্ড েজে অপতরজশ্াতযে তেে তেজের দাে। শুক্রবার এই তেজের দাে কজেজি ৫ দশ্তেক ৫ শ্োাংশ্। শুক্রবার ২০১৮ সাজের েজযয তেজের দাে সব যতেম্ন ৫৯ ডোজর তেজে আজস। তেে তকাম্পাতের তশ্োজরর দাে পজড় যাওোে শুক্রবার ডাও সূচজকর োে ১৭৮ পজেে কজে যাে। ইরাজের ওপর তেজষযাজ্ঞা আসজি এই আশ্ঙ্কাে তসৌতদ আরবসে ওজপকভুক্ত তদশ্গুজো তেজের উৎপাদে বাতড়জে তদে। তকন্তু যুক্তরাে এরপর ভারে, চীেসে তবশ্ কজেকটি তদশ্জক ইরাে তথজক তেে তকোর তবোে িাড় তদজে বাজাজর তেজের সরবরাে অজেকিা তবজড় যাে। এজে বাজাজর তেজের দাে ক্রজেই কেজে কেজে এ জােগাে এজস দা াঁতড়জেজি। অেযতদজক ববতশ্বক আতথ যক বাজাজর প্রবৃক্তি তেজে আবারও আশ্ঙ্কা বেতর েজেজি। েজব তেজের পড়তে দাে তভাগযপেয তক্রোজদর জেয আশ্ীব যাদ েজে এজসজি। এফএক্সটিএজের তবজেষক েুকোে ওেুেুগা বজেে, ‘তেজের সরবরাে একতদজক বাড়জি, অেযতদজক চাতেদা কেজি—এই দুই কারজে তেজের বাজাজর তবপয যে তেজে আসজি। তসাতসজেি তজোজরজের তেসাব েজে, চেতে প্রাতিজক বড় বড় েেতবজের ক্ষতের পতরোে ৭৭০ তকাটি ডোর িাতড়জে তগজি। যুক্তরাজে শুক্রবার এক গযােে জ্বাোতের দাে তিে ২ দশ্তেক ৫৮ ডোর, যা এক োস আজগও তিে ২.৮৪ ডোর। এই পতরতিতেজে তেে উৎপাদেকারী তদশ্গুজো আগােী োজস তভজেোে ওজপক ও সেজযাগী তদশ্গুজোর ববেজকর তদজক োতকজে আজি।

Summary Generated by Fuzzy C-Means (Article 1)

২০১৭ সাজের অজটাবজরর পর এই প্রথে তেজের দাে বযাজরেপ্রতে ৫০ দশ্তেক ৪২ ডোজর তেজে এে। তকন্তু অতে সরবরাে তেজে শ্ঙ্কা, চাতেদা পজড় যাওো—এসব কারজে এক োজসর েজযয তেজের দাে এেিা কজে তগজি। এক োস আজগই পয যজবক্ষজকরা তদে গুেতিজেে, তেজের দাে কজব বযাজরেপ্রতে ১০০ ডোজর উেজব।জসাতসজেি তজোজরজের পেয গজবষো তবভাজগর প্রযাে োইজকে তেইগ বজেে, িে সপ্তাে যজর দাে তয োজর কেজি, োজে তবতেজোগকারীজদর োতভশ্বাস উজে যাওোর তজাগাড়।জেে খােসাংতেষ্ট বযক্তক্তরা আশ্া করজিে, আগােী োজস তভজেোে ওজপক ও সেজযাগী তদশ্গুজোর ববেজক তসৌতদ আরবসে অেযরা তেজের সরবরাে যজথষ্ট োজর কোজব এবাং োজে বাজার তকিুিা সাশ্রেী েজব। েজব দাে কজে যাওো সজেও োতকযে তপ্রতসজডে তডাোল্ড ট্রাম্প ওজপকসে তসৌতদ আরবজক উৎপাদে ো কোজে চাপ তদজেে। আর সম্প্রতে তেতে তযভাজব তসৌতদ আরজবর প্রশ্াংসা
করজেে, োজে তবতেজোগকারীজদর েজে শ্ঙ্কা, তসৌতদ আরব সম্ভবে উৎপাদে তেেে একিা কোজব ো। শুক্রবার ২০১৮ সাজের েজযয তেজের দাে সব যতেম্ন ৫৯ ডোজর তেজে আজস। তেে তকাম্পাতের তশ্োজরর দাে পজড় যাওোে শুক্রবার ডাও সূচজকর োে ১৭৮ পজেে কজে যাে।ইরাজের ওপর তেজষযাজ্ঞা আসজি এই আশ্ঙ্কাে তসৌতদ আরবসে ওজপকভুক্ত তদশ্গুজো তেজের উৎপাদে বাতড়জে তদে। তকন্তু যুক্তরাে এরপর ভারে, চীেসে তবশ্ কজেকটি তদশ্জক ইরাে তথজক তেে তকোর তবোে িাড় তদজে বাজাজর তেজের সরবরাে অজেকিা তবজড় যাে।অেযতদজক ববতশ্বক আতথ যক বাজাজর প্রবৃক্তি তেজে আবারও আশ্ঙ্কা বেতর েজেজি। এসব কারজে ববতশ্বক অথ যেীতের চাতেকা শ্ক্তক্ত জ্বাোতে তেজের বাজার রেরো েওোর সম্ভাবো তেই। এফএক্সটিএজের তবজেষক েুকোে ওেুেুগা বজেে, ‘তেজের সরবরাে একতদজক বাড়জি, অেযতদজক চাতেদা কেজি—এই দুই কারজে তেজের বাজাজর তবপয যে তেজে আসজি। তসাতসজেি তজোজরজের তেসাব েজে, চেতে প্রাতিজক বড় বড় েেতবজের ক্ষতের পতরোে ৭৭০ তকাটি ডোর িাতড়জে তগজি। যুক্তরাজে শুক্রবার এক গযােে জ্বাোতের দাে তিে ২ দশ্তেক ৫৮ ডোর, যা এক োস আজগও তিে ২.৮৪ ডোর।এই পতরতিতেজে তেে উৎপাদেকারী তদশ্গুজো আগােী োজস তভজেোে ওজপক ও সেজযাগী তদশ্গুজোর ববেজকর তদজক োতকজে আজি।

Summary Generated by TextRank Algorithm (Article 1)

অপতরজশ্াতযে তেজের দাে আরও এক দফা কেে। গেকাে যুক্তরাজে তেজের দাে ৭ শ্োাংশ্ কজেজি। ২০১৭ সাজের অজটাবজরর পর এই প্রথে তেজের দাে বযাজরেপ্রতে ৫০ দশ্তেক ৪২ ডোজর তেজে এে। অথচ গে অজটাবজরই তেজের দাে বযাজরেপ্রতে ৭৬ ডোজর উজেতিে। তকন্তু অতে সরবরাে তেজে শ্ঙ্কা, চাতেদা পজড় যাওো—এসব কারজে এক োজসর েজযয তেজের দাে এেিা কজে তগজি। এক োস আজগই পয যজবক্ষজকরা তদে গুেতিজেে, তেজের দাে কজব বযাজরেপ্রতে ১০০ ডোজর উেজব। এখে তেজের এই পড়তে দাে তদজখ োাঁজদর কপাজে তচিার ভাাঁজ পজড়জি। তেজের দাজের ববতশ্বক োেদণ্ড েজে অপতরজশ্াতযে তেে তেজের দাে। শুক্রবার এই তেজের দাে কজেজি ৫ দশ্তেক ৫ শ্োাংশ্। তশ্ভরে ও কজোজকাতফতেপজসর তশ্োজরর দাে ৩ শ্োাংশ্ পজড় যাে। তকন্তু যুক্তরাে এরপর ভারে, চীেসে তবশ্ কজেকটি তদশ্জক ইরাে তথজক তেে তকোর তবোে িাড় তদজে বাজাজর তেজের সরবরাে অজেকিা তবজড় যাে। এজে বাজাজর তেজের দাে ক্রজেই কেজে কেজে এ জােগাে এজস দা াঁতড়জেজি। এফএক্সটিএজের তবজেষক েুকোে ওেুেুগা বজেে, ‘তেজের সরবরাে একতদজক বাড়জি, অেযতদজক চাতেদা কেজি—এই দুই কারজে তেজের বাজাজর তবপয যে তেজে আসজি.’ তেজের দাে এভাজব কোর কারজে অজেজকই েেবুি েজে তগজিে। েজব তেজের পড়তে দাে তভাগযপেয তক্রোজদর জেয আশ্ীব যাদ েজে এজসজি।

Aggregated Summary for Article 1

২০১৭ সাজের অজটাবজরর পর এই প্রথে তেজের দাে বযাজরেপ্রতে ৫০ দশ্তেক ৪২ ডোজর তেজে এে। তকন্তু অতে সরবরাে তেজে শ্ঙ্কা, চাতেদা পজড় যাওো—এসব কারজে এক োজসর েজযয তেজের দাে এেিা কজে তগজি।জসাতসজেি তজোজরজের পেয গজবষো তবভাজগর প্রযাে োইজকে তেইগ বজেে, িে সপ্তাে যজর দাে তয োজর কেজি, োজে তবতেজোগকারীজদর োতভশ্বাস উজে যাওোর তজাগাড়।জেে খােসাংতেষ্ট বযক্তক্তরা আশ্া করজিে, আগােী োজস তভজেোে ওজপক ও সেজযাগী তদশ্গুজোর ববেজক তসৌতদ আরবসে অেযরা তেজের সরবরাে যজথষ্ট োজর কোজব এবাং োজে বাজার তকিুিা সাশ্রেী েজব। েজব দাে কজে যাওো সজেও োতকযে তপ্রতসজডে তডাোল্ড ট্রাম্প ওজপকসে তসৌতদ আরবজক উৎপাদে ো কোজে চাপ তদজেে। আর সম্প্রতে তেতে তযভাজব তসৌতদ আরজবর প্রশ্াংসা করজেে, োজে তবতেজোগকারীজদর েজে শ্ঙ্কা, তসৌতদ আরব সম্ভবে উৎপাদে তেেে একিা কোজব ো। শুক্রবার ২০১৮ সাজের েজযয তেজের দাে সব যতেম্ন ৫৯ ডোজর তেজে আজস। তেে তকাম্পাতের তশ্োজরর দাে পজড়
<s>যাওোে শুক্রবার ডাও সূচজকর োে ১৭৮ পজেে কজে যাে।ইরাজের ওপর তেজষযাজ্ঞা আসজি এই আশ্ঙ্কাে তসৌতদ আরবসে ওজপকভুক্ত তদশ্গুজো তেজের উৎপাদে বাতড়জে তদে। তকন্তু যুক্তরাে এরপর ভারে, চীেসে তবশ্ কজেকটি তদশ্জক ইরাে তথজক তেে তকোর তবোে িাড় তদজে বাজাজর তেজের সরবরাে অজেকিা তবজড় যাে।অেযতদজক ববতশ্বক আতথ যক বাজাজর প্রবৃক্তি তেজে আবারও আশ্ঙ্কা বেতর েজেজি। এসব কারজে ববতশ্বক অথ যেীতের চাতেকা শ্ক্তক্ত জ্বাোতে তেজের বাজার রেরো েওোর সম্ভাবো তেই। এফএক্সটিএজের তবজেষক েুকোে ওেুেুগা বজেে, ‘তেজের সরবরাে একতদজক বাড়জি, অেযতদজক চাতেদা কেজি—এই দুই কারজে তেজের বাজাজর তবপয যে তেজে আসজি।’ তেজের দাে এভাজব কোর কারজে অজেজকই েেবুি েজে তগজিে। যুক্তরাজে শুক্রবার এক গযােে জ্বাোতের দাে তিে ২ দশ্তেক ৫৮ ডোর, যা এক োস আজগও তিে ২.৮৪ ডোর।এই পতরতিতেজে তেে উৎপাদেকারী তদশ্গুজো আগােী োজস তভজেোে ওজপক ও সেজযাগী তদশ্গুজোর ববেজকর তদজক োতকজে আজি। Gold Summary Article 2 তবজশ্বর অেযেে অথ যনেতেক পরাক্রেশ্ােী রাে চীে। সােতরক শ্ক্তক্ত ও প্রতেরক্ষা, েেুে েেুে েথয ও তযাগাজযাগ প্রযুক্তক্তর উপকরে উদ্ভাবে এবাং উন্নেজে তদশ্টি যজথষ্ট এতগজে। তসই েুেোে তশ্ক্ষাজক্ষজে তযে এতগজে তযজে পাজরতে।যুক্তরাজজযর সাপ্তাতেক প্রকাশ্ো িাইেস োোর এডুজকশ্ে সােতেকীর সব যজশ্ষ জতরজপ তবজশ্বর শ্ীষ য ১০টি তবশ্বতবদযােজের েজযয চীজের তকাজো তবশ্বতবদযােজের োে তেই। অবশ্য সাাংোই র ্যাক্তঙ্কাংজে শ্ীষ য ৫০০ তবশ্বতবদযােজের েজযয চীজেরই আজি ৪৫টি তবশ্বতবদযােে। আর এ জেয তদশ্টির োেকরা তসাংেুো তবশ্বতবদযােে ও তপতকাং তবশ্বতবদযােজের ওপর আরও তবতশ্ তজার তদজেজি চীো কেৃযপক্ষ।পক্তিো গজবষোতভতিক তবশ্বতবদযােেগুজোর আদজে পতরচাতেে েজে চীজের তসাংেুো তবশ্বতবদযােে ও তপতকাং তবশ্বতবদযােে। এই দুটি তবশ্বতবদযােে পরস্পর প্রতেজবশ্ী ও প্রতেজযাগী, যা তকো চীজের অক্সজফাডয ও তকেতেজ তেজসজব খযাে। েন্ডেতভতিক সােতেকী দয ইজকােতেজের প্রতেজবদে বেজি, ১৯৯৫ সাে তথজক চীো তকন্দ্রীে সরকার তদশ্টির তবশ্বতবদযােেগুজো তবজশ্বর শ্ীষ য তবশ্বতবদযােজে উন্নীে করজে োখ োখ ডোর বযে অবযােে তরজখজি। ২০১৫ সাে তথজক চােু করা েজেজি ডবে ফােয ক্লাস প্লযাে প্রকল্প। এর েক্ষয দ্রেুেে সেজে তবশ্বতবদযােেগুজো তথজক একটি তবশ্বতবদযােেজক তবশ্বোজে পতরেে করা ও প্রতেিাজের পতরসর বাড়াজো।জযজকাজো তকিুর তপিজে অথ য েজে েূে চাতেকা শ্ক্তক্ত। অথ যােে প্রক্তক্রো 
তবশ্বতবদযােেগুজোজক উৎকৃষ্ট োজের গজবষোে অেুপ্রাতেে কজর। চীো তবশ্বতবদযােজের একাজডতেক গজবষোে তেজোক্তজে বযক্তক্তজদর যজথষ্ট প্রজোদোরও বযবিা রজেজি। তসাংেুোর তেযাবী তশ্ক্ষাথীরা তসরা গজবষক েজে তদজশ্র উন্নেজে কাজ কজরে তকাংবা তদজশ্র েজে তবজদজশ্ গজবষোে তেযুক্ত েে। ২০১৭ সাজে তসাংেুো তবশ্বতবদযােে ১ োজার ৩৮৫ জেজক ডটজরি উপাতয তদজেজি। এজকই সেে যুক্তরাজের েযাসাচুজসিস ইেতেটিউি অব তিকজোেক্তজজে (এেআইটি) ৬৫৪ জেজক ডটজরি তদওো েে। 33 | P a g e Summary Generated by FCM (Article 2) যুক্তরাজজযর সাপ্তাতেক প্রকাশ্ো িাইেস োোর এডুজকশ্ে সােতেকীর সব যজশ্ষ জতরজপ তবজশ্বর শ্ীষ য ১০টি তবশ্বতবদযােজের েজযয চীজের তকাজো তবশ্বতবদযােজের োে তেই। অবশ্য সাাংোই র ্যাক্তঙ্কাংজে শ্ীষ য ৫০০ তবশ্বতবদযােজের েজযয চীজেরই আজি ৪৫টি তবশ্বতবদযােে। আর এ জেয তদশ্টির োেকরা তসাংেুো তবশ্বতবদযােে ও তপতকাং তবশ্বতবদযােজের ওপর আরও তবতশ্ তজার তদজেজি চীো কেৃযপক্ষ। পক্তিো গজবষোতভতিক তবশ্বতবদযােেগুজোর আদজে পতরচাতেে েজে চীজের তসাংেুো তবশ্বতবদযােে ও তপতকাং তবশ্বতবদযােে। ১৯৮৯ সাজে তেজেেআেজেে স্কোজর তবজক্ষাজভ তবশ্বতবদযােেটি অগ্রেী ভূতেকা রাজখ।েন্ডেতভতিক সােতেকী দয ইজকােতেজের প্রতেজবদে বেজি, ১৯৯৫ সাে তথজক চীো তকন্দ্রীে সরকার তদশ্টির তবশ্বতবদযােেগুজো তবজশ্বর শ্ীষ য তবশ্বতবদযােজে উন্নীে করজে োখ োখ ডোর বযে অবযােে তরজখজি। এর আওোে প্রথে ২১১টি প্রকল্প োজে তেওো েজেজি। একতবাংশ্ শ্োব্দীর চযাজেঞ্জ তোকাতবোে প্রাে ১০০টি প্রতেিােজক প্রস্তুে করা েজেজি। ২০১৫ সাে তথজক চােু করা েজেজি ডবে ফােয ক্লাস প্লযাে প্রকল্প। এর েক্ষয</s>
<s>দ্রেুেে সেজে তবশ্বতবদযােেগুজো তথজক একটি তবশ্বতবদযােেজক তবশ্বোজে পতরেে করা ও প্রতেিাজের পতরসর বাড়াজো। প্রতেজবশ্ী তদশ্ ভারে োজদর ২০টি তবশ্বতবদযােেজক তবশ্বোজে তেওোর তঘাষো তদজেজি। এেেতক োইজজতরোর েজো তদশ্ ২০২০ সাজের েজযয োজদর অিে দুটি তবশ্বতবদযােেজক তবজশ্বর শ্ীষ য ২০০টির েজযয অিভুযক্তক্তর েক্ষযোো তেয যারে কজরজি।তসাংেুোর তেযাবী তশ্ক্ষাথীরা তসরা গজবষক েজে তদজশ্র উন্নেজে কাজ কজরে তকাংবা তদজশ্র েজে তবজদজশ্ গজবষোে তেযুক্ত েে। ২০১৭ সাজে তসাংেুো তবশ্বতবদযােে ১ োজার ৩৮৫ জেজক ডটজরি উপাতয তদজেজি। এজকই সেে যুক্তরাজের েযাসাচুজসিস ইেতেটিউি অব তিকজোেক্তজজে (এেআইটি) ৬৫৪ জেজক ডটজরি তদওো েে। তসাংেুো তবশ্বতবদযাজের ভাইস তচোরেযাে ইোাং তবে বজেে, ‘তসাংেুোর সবজচজে গুরুত্বপূে য উন্নেে তিে ১৯৭৮ সাজে, যখে তডাং ক্তজোওতপাং (প্রোে রাজেীতেক) বজেে, চীে তবপুেসাংখযক তশ্ক্ষাথী তবজদজশ্ পাোজব। Summary Generated by TextRank Algorithm (Article 2) যুক্তরাজজযর সাপ্তাতেক প্রকাশ্ো িাইেস োোর এডুজকশ্ে সােতেকীর সব যজশ্ষ জতরজপ তবজশ্বর শ্ীষ য ১০টি তবশ্বতবদযােজের েজযয চীজের তকাজো তবশ্বতবদযােজের োে তেই। অবশ্য সাাংোই র ্যাক্তঙ্কাংজে শ্ীষ য ৫০০ তবশ্বতবদযােজের েজযয চীজেরই আজি ৪৫টি তবশ্বতবদযােে। আর এ জেয তদশ্টির োেকরা তসাংেুো তবশ্বতবদযােে ও তপতকাং তবশ্বতবদযােজের ওপর আরও তবতশ্ তজার তদজেজি চীো কেৃযপক্ষ। ১৯১১ সাজে তবইক্তজাংজে প্রতেটিে েে তসাংেুো তবশ্বতবদযােে। শ্োতযক বিজরর পুজরাজো এই তবশ্বতবদযােে এখে গজবষো, তবজ্ঞাে, প্রযুক্তক্ত, প্রজকৌশ্ে ও গতেে তবষজে চীোজদর গজব যর প্রেীক। পক্তিো গজবষোতভতিক তবশ্বতবদযােেগুজোর আদজে পতরচাতেে েজে চীজের তসাংেুো তবশ্বতবদযােে ও তপতকাং তবশ্বতবদযােে। এই দুটি তবশ্বতবদযােে পরস্পর প্রতেজবশ্ী ও প্রতেজযাগী, যা তকো চীজের অক্সজফাডয ও তকেতেজ তেজসজব খযাে। তসাংেুো েজে প্রচতেে ও বাস্তবযেী তবশ্বতবদযােে। আর তপতকাং তবশ্বতবদযােে েজে তদশ্টির কতব, দাশ্ যতেক ও তবপ্লবীজদর েীথ যিাে। চীজের সাজবক শ্ীষ য তেো োও তসেুাং এই তবশ্বতবদযােে গজবষো কজরতিজেে। েন্ডেতভতিক সােতেকী দয ইজকােতেজের প্রতেজবদে বেজি, ১৯৯৫ সাে তথজক চীো তকন্দ্রীে সরকার তদশ্টির তবশ্বতবদযােেগুজো তবজশ্বর শ্ীষ য তবশ্বতবদযােজে উন্নীে করজে োখ োখ ডোর বযে অবযােে তরজখজি। চীো তবশ্বতবদযােজের একাজডতেক গজবষোে তেজোক্তজে বযক্তক্তজদর যজথষ্ট প্রজোদোরও বযবিা রজেজি। 
প্রযুক্তক্ত ও প্রতেজযাতগোতেভযর তবজশ্বর তবতভন্ন তদজশ্র সরকারও েীতেতেয যারজে পতরবেযে তেজে আসজি। তবশ্বোজের তবশ্বতবদযােে প্রতেিা, উন্নে গজবষো, র ্যাক্তঙ্কাংজে অিভুযক্তক্ত ও অগ্রগতের জেয তশ্ক্ষা খাজে চীে িাড়াও ভারে, তসঙ্গাপুর, দতক্ষে তকাতরো, োইওোে, ফ্রান্স, জাে যাতে তবপুে অথ য বযে করজি। এেেতক োইজজতরোর েজো তদশ্ ২০২০ সাজের েজযয োজদর অিে দুটি তবশ্বতবদযােেজক তবজশ্বর শ্ীষ য ২০০টির েজযয অিভুযক্তক্তর েক্ষযোো তেয যারে কজরজি। ২০১৭ সাজে তসাংেুো তবশ্বতবদযােে ১ োজার ৩৮৫ জেজক ডটজরি উপাতয তদজেজি। অবশ্য এই সাংখযা তসাংেুো তবশ্বতবদযােজের সাফজেযর প্রযাে কারে েে। তসাংেুো তবশ্বতবদযাজের ভাইস তচোরেযাে ইোাং তবে বজেে, ‘তসাংেুোর সবজচজে গুরুত্বপূে য উন্নেে তিে ১৯৭৮ সাজে, যখে তডাং ক্তজোওতপাং (প্রোে রাজেীতেক) বজেে, চীে তবপুেসাংখযক তশ্ক্ষাথী তবজদজশ্ পাোজব.’ তেতে আরও বজেে, ‘১০ োজার তশ্ক্ষাথী তবজদজশ্ পাোজো প্রজোজে। ৪০ বির যজর তসাংেুো এবাং তদজশ্র অেয শ্ীষ য তবশ্বতবদযােেগুজো োজদর কৃতেত্ব যজর তরজখজি। এসব তবশ্বতবদযােজের Evaluation and Results 34 | P a g e প্রতে আকষ যে বাড়াজে সরকারও অতেতরক্ত সম্পদ ও প্রজোজেীে উপকরে সরবরাে কজর আসজি। তবজশ্বর শ্ীষ য জাে যােগুজো ইাংজরক্তজ ভাষাে তেতখে ও প্রকাতশ্ে েে, যা চীো তবজ্ঞােীজদর জেয প্রতেবন্ধকো বেতর কজর। তোদ্দা কথা েজে, তশ্ক্ষাজক্ষজে তবপ্লব ঘিাজে এবাং তবজশ্বর শ্ীষ য তবশ্বতবদযােজের কাোজর চীজের তবশ্বতবদযােেজক অিভুযক্তক্ত করজে যা করা প্রজোজে, ো–ই োরা কজর যাজে। অক্সজফাডয তবশ্বতবদযােজের অযযাপক ও তসাংেুো জাে যাে অব এডুজকশ্জের সম্পাদকীে পতরষজদর সদসয তসেে োক্তজযেসে বজেে, আগােী পাাঁচ বির তকাংবা এর কে</s>
<s>সেজের েজযয োম্বার ওোে তবশ্বতবদযােে েজব তসাংেুো। Aggregate Summary (Article 2) যুক্তরাজজযর সাপ্তাতেক প্রকাশ্ো িাইেস োোর এডুজকশ্ে সােতেকীর সব যজশ্ষ জতরজপ তবজশ্বর শ্ীষ য ১০টি তবশ্বতবদযােজের েজযয চীজের তকাজো তবশ্বতবদযােজের োে তেই। পক্তিো গজবষোতভতিক তবশ্বতবদযােেগুজোর আদজে পতরচাতেে েজে চীজের তসাংেুো তবশ্বতবদযােে ও তপতকাং তবশ্বতবদযােে।েন্ডেতভতিক সােতেকী দয ইজকােতেজের প্রতেজবদে বেজি, ১৯৯৫ সাে তথজক চীো তকন্দ্রীে সরকার তদশ্টির তবশ্বতবদযােেগুজো তবজশ্বর শ্ীষ য তবশ্বতবদযােজে উন্নীে করজে োখ োখ ডোর বযে অবযােে তরজখজি। ২০১৫ সাে তথজক চােু করা েজেজি ডবে ফােয ক্লাস প্লযাে প্রকল্প। এর েক্ষয দ্রেুেে সেজে তবশ্বতবদযােেগুজো তথজক একটি তবশ্বতবদযােেজক তবশ্বোজে পতরেে করা ও প্রতেিাজের পতরসর বাড়াজো। এেেতক োইজজতরোর েজো তদশ্ ২০২০ সাজের েজযয োজদর অিে দুটি তবশ্বতবদযােেজক তবজশ্বর শ্ীষ য ২০০টির েজযয অিভুযক্তক্তর েক্ষযোো তেয যারে কজরজি।তসাংেুোর তেযাবী তশ্ক্ষাথীরা তসরা গজবষক েজে তদজশ্র উন্নেজে কাজ কজরে তকাংবা তদজশ্র েজে তবজদজশ্ গজবষোে তেযুক্ত েে। ২০১৭ সাজে তসাংেুো তবশ্বতবদযােে ১ োজার ৩৮৫ জেজক ডটজরি উপাতয তদজেজি। এজকই সেে যুক্তরাজের েযাসাচুজসিস ইেতেটিউি অব তিকজোেক্তজজে (এেআইটি) ৬৫৪ জেজক ডটজরি তদওো েে।

Chapter 5 Conclusion

As the world progresses in this information technology era, research in the Bengali language becomes more and more important. A text summarization system is significant because it saves time, effort and data. Text summarization follows two schools of thought: extractive summarization and abstractive summarization. While the output of the abstractive method is more natural and coherent, it needs more processing and the complexity of the program is very high. The extractive method therefore provides a better trade-off due to its lower computational requirements. In this paper, an FCM based algorithm is used in conjunction with 6 sentence scoring methods to find the most important sentences. 
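The clustering step named above can be illustrated with a minimal, pure-Python fuzzy c-means sketch. The scores, the two-cluster setup, the fuzzifier m = 2 and the 0.5 membership cut-off below are assumptions for illustration, not the exact configuration used in this work:

```python
# Illustrative fuzzy c-means over 1-D aggregate sentence scores.
# The scores, c=2 clusters, fuzzifier m=2 and the 0.5 membership
# threshold are assumptions for this sketch, not the thesis settings.

def fuzzy_c_means(scores, c=2, m=2.0, iters=50):
    lo, hi = min(scores), max(scores)
    # Deterministic initialisation: centers spread between min and max.
    centers = [lo + (hi - lo) * k / (c - 1) for k in range(c)]
    u = [[0.0] * c for _ in scores]
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        for i, x in enumerate(scores):
            d = [abs(x - ck) or 1e-12 for ck in centers]
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / dj) ** (2.0 / (m - 1.0)) for dj in d)
        # Center update: v_k = sum_i u_ik^m * x_i / sum_i u_ik^m.
        for k in range(c):
            w = [u[i][k] ** m for i in range(len(scores))]
            centers[k] = sum(wi * x for wi, x in zip(w, scores)) / sum(w)
    return u, centers

scores = [0.9, 0.8, 0.2, 0.1, 0.85]   # hypothetical aggregate scores
u, centers = fuzzy_c_means(scores)
best = max(range(len(centers)), key=lambda k: centers[k])
# Sentences belonging (membership > 0.5) to the high-score cluster.
summary_idx = [i for i, row in enumerate(u) if row[best] > 0.5]
```

In the actual system the six per-sentence scores would form a six-dimensional feature vector per sentence rather than the single aggregate value used here.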
For the purpose of a comparative study, a TextRank algorithm was used to generate a summary alongside an aggregate scoring algorithm. The TextRank algorithm simply uses a similarity measure to find the most useful sentences in an article. The aggregate scoring algorithm also uses the 6 scoring methods: the scores from all 6 methods are summed to compute an aggregate score for each sentence. These scores are then sorted in descending order, and the top-scoring sentences are printed in their original order to form an extractive summary. The FCM based algorithm tends to return a higher F-score as well as a higher number of relevant sentences (sentences that also appear in the gold summary). In the future, an FCM based algorithm in conjunction with word-based scoring and additional sentence-based scoring methods can be used to further improve extractive summarization techniques. FCM can also be modified and implemented in abstractive text summarization for more human-like summaries. Furthermore, automatic text summarization could also be implemented as a web plugin or mobile application that automatically scrapes data from websites and generates a real-time text summary.

References

[1] Islam, M. T., & Al Masum, S.</s>
<s>M. (2004, December). Bhasa: A corpus-based information retrieval and summariser for Bengali text. In Proceedings of the 7th International Conference on Computer and Information Technology. [2] Uddin, M. N., & Khan, S. A. (2007, December). A study on text summarization techniques and implement few of them for Bangla language. In Computer and Information Technology, 2007. ICCIT 2007. 10th International Conference on (pp. 1-4). IEEE. [3] Sarkar, K. (2012, August). An approach to summarizing Bengali news documents. In Proceedings of the International Conference on Advances in Computing, Communications and Informatics (pp. 857-862). ACM. [4] Efat, M. I. A., Ibrahim, M., & Kayesh, H. (2013, May). Automated Bangla text summarization by sentence scoring and ranking. In Informatics, Electronics & Vision (ICIEV), 2013 International Conference on (pp. 1-5). IEEE. [5] Das, A., & Bandyopadhyay, S. (2010, August). Topic-based Bengali opinion summarization. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters (pp. 232-240). Association for Computational Linguistics. [6] Sarkar, K. (2012). Bengali text summarization by sentence extraction. arXiv preprint arXiv:1201.2240. [7] Abujar, S., Hasan, M., Shahin, M. S. I., & Hossain, S. A. (2017, July). A heuristic approach of text summarization for Bengali documentation. In Computing, Communication and Networking Technologies (ICCCNT), 2017 8th International Conference on (pp. 1-8). IEEE. [8] Akter, S., Asa, A. S., Uddin, M. P., Hossain, M. D., Roy, S. K., & Afjal, M. I. (2017, February). An extractive text summarization technique for Bengali document(s) using K-means clustering algorithm. In Imaging, Vision & Pattern Recognition (icIVPR), 2017 IEEE International Conference on (pp. 1-6). IEEE. [9] Luhn, H. P. (1958). The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2), 159-165. [10] Moratanch, N., & Chitrakala, S. (2016, March). 
A survey on abstractive text summarization. In Circuit, Power and Computing Technologies (ICCPCT), 2016 International Conference on (pp. 1-7). IEEE. [11] Andhale, N., & Bewoor, L. A. (2016, August). An overview of text summarization techniques. In Computing Communication Control and automation (ICCUBEA), 2016 International Conference on (pp. 1-7). IEEE. [12] Moratanch, N., & Chitrakala, S. (2017, January). A survey on extractive text summarization. In Computer, Communication and Signal Processing (ICCCSP), 2017 International Conference on (pp. 1-6). IEEE. [13] Krishnaveni, P., & Balasundaram, S. R. (2017, July). Automatic text summarization by local scoring and ranking for improving coherence. In Computing Methodologies and Communication (ICCMC), 2017 International Conference on (pp. 59-64). IEEE. [14] Vijay, S., Rai, V., Gupta, S., Vijayvargia, A., & Sharma, D. M. (2017, December). Extractive text summarisation in Hindi. In Asian Language Processing (IALP), 2017 International Conference on (pp. 318-321). IEEE. [15] Galarnyk, M. “PCA Using Python (Scikit-Learn) – Towards Data Science.” Towards Data Science, 5 Dec. 2017, towardsdatascience.com/pca-using-python-scikit-learn-e653f8989e60. [16] Tian, S. (2017). A hybrid debris flow hazard degree analysis model based on PCA and SFLA-FCM. Revista de la Facultad de Ingeniería, 31(9). [17] Haque, M., Pervin, S., & Begum, Z. (2017). An Innovative Approach of Bangla Text Summarization by Introducing Pronoun Replacement and Improved Sentence Ranking. Journal of</s>
<s>Information Processing Systems, 13(4). [18] Patil, D. B., & Dongre, Y. V. (2015). A fuzzy approach for text mining. IJ Mathematical Sciences and Computing, 4, 34-43. [19] Dunn, J. C. (1973). A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters. [20] Bezdek, J. C. (1981). Objective function clustering. In Pattern recognition with fuzzy objective function algorithms (pp. 43-93). Springer, Boston, MA. [21] Witte, R., & Bergler, S. (2007). Fuzzy clustering for topic analysis and summarization of document collections. In Advances in Artificial Intelligence (pp. 476-488). Springer, Berlin, Heidelberg. [22] Pole, K. R., & Mote, V. R. (2017, October). Improvised fuzzy clustering using name entity recognition and natural language processing. In Intelligent Systems and Information Management (ICISIM), 2017 1st International Conference on(pp. 123-126). IEEE. [23] Kamal, R. (2014). rafi-kamal/Bangla-Stemmer. [online] GitHub. Available at: https://github.com/rafi-kamal/Bangla-Stemmer. [24] Mihalcea, R., & Tarau, P. (2004). Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing. [25] Langville, A. N., & Meyer, C. D. (2011). Google's PageRank and beyond: The science of search engine rankings. Princeton University Press. [26] Li, W., & Zhao, J. (2016, July). TextRank algorithm by exploiting Wikipedia for short text keywords extraction. In 2016 3rd International Conference on Information Science and Control Engineering (ICISCE) (pp. 683-686). IEEE. [27] Ross, T. J. (2005). Fuzzy logic with engineering applications. John Wiley & Sons. [28] (n.d.). Retrieved from https://home.deib.polimi.it/matteucc/Clustering/tutorial_html/cmeans.html [29] Josh Warner, Jason Sexauer, scikit-fuzzy, twmeggs, Alexandre M. S., Aishwarya Unnikrishnan, … Himanshu Mishra. (2017, October 6). JDWarner/scikit-fuzzy: Scikit-Fuzzy 0.3.1 (Version 0.3.1). Zenodo. 
doi:10.5281/zenodo.1002946 [30] DavidBelicza. (2018, October 08). DavidBelicza/PHP-Science-TextRank. Retrieved from https://github.com/DavidBelicza/PHP-Science-TextRank [31] Research work on Bangla NLP. (n.d.). Retrieved from http://www.bnlpc.org/research.php [32] Lin, C. Y. (2004). ROUGE: A package for automatic evaluation of summaries. Text Summarization Branches Out.</s>
<s>Sentence Based Topic Modeling using Lexical Analysis

Shahinur Rahman1, Shikh Abujar1, S. M. Mazharul Hoque Chowdhury1, Mohd Saifuzzaman1, Syed Akhter Hossain1
{shahinur3606, sheikh.cse, mazharul2213, saifuzzaman.cse}@diu.edu.bd, aktarhossain@daffodilvarsity.edu.bd
1Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh

Abstract. Every second, the world generates tons of data on the internet in different formats, most of it text. The demand for topic modeling is therefore higher than ever. Data scientists are working day and night to make it more effective and accurate using different methods. Topic modeling focuses on the keywords that can express or identify the topic discussed in a document, and it can save a lot of time by freeing its user from page-by-page manual review. In this paper a model is proposed to find the topic of a document. The model works on the relations between the most frequent words and their relations with the sentences in the document, and it can be used to increase the accuracy of topic modeling.

Keywords: algorithm, frequent, method, relation, segmentation, sentence, summarization, topic modeling, word.

1. Introduction

Nowadays, with the development and increasing use of the internet, readers not only read documents but also contribute information publicly. From this large amount of information it is quite difficult to find a particular or desired word, so it has become more important to compress and summarize data. Extracting information manually from such a huge amount of data is simply impractical [1]. One of the most important problems in text mining is text summarization, in which a large collection of text is condensed into a smaller, tightly packed text that represents the meaning of the original.</s>
Text summarization helps readers understand a huge amount of text easily, which saves a lot of time. Text summarization is divided into two types: single-document and multi-document summarization. In single-document summarization, one large text is condensed into a single summary document; in multi-document summarization, a set of documents is condensed into one summary document. In both approaches a large amount of data is summarized and stored in a single file. Within text summarization, extractive summarization is a common and mature technique that extracts the important sentences and then recombines them to generate a summary from those sentences. For topic modeling, the Latent Dirichlet Allocation (LDA) model is used to explore topics first. From a document, we extract the sentences associated with the most frequent words; it is then possible to find relations between high-scoring words and long sentences.

2. Literature Review

There are many studies on topic modeling nowadays, but only a small number on Bengali sentence-based topic modeling. In this paper we use the LDA model with lexical analysis to extract topics from a large collection of information. LDA is one of the most common methods for topic modeling on different types of data; examples that use auxiliary information are the author-topic model</s>
<s>(Rosen-Zvi et al., 2004), the tag-topic model (Tsai, 2011) and Topic-Link LDA (Liu et al., 2009). All of this work has been done in English. Previously, Geetanjali and Pushpak analyzed Bengali poet classification and identification, and Amitava and Sivaji (2010) analyzed documents for opinion-based summarization, but there is no related work on topic modeling in Bengali. In the current year, good research has been done by different researchers all over the world. Jiang and Zhou (2017) worked on topic modeling based on Poisson decomposition; they tried to derive statistical results based on multidimensional characteristics of the topic [7]. At the same time, Truică and his team worked on the same topic, applying an automatic term recognition system using contextual cues for topic modeling. Ruohonen (2017) classified web exploits using topic modeling [8]. Karami, Gangopadhyay, Zhou and Kharrazi (2017) worked on a fuzzy approach targeting topic modeling of health and media corpora; the fuzzy approach was used to analyze medical documents and extract information [9]. Work on probabilistic topic models for text data retrieval and analysis has been done by Zhai (2017) [10]. In this research work, topic modeling is based on sentiment analysis: since sentiment is a very important part of a sentence and the main focus depends on it, finding the sentiment expressed in a sentence makes it possible to identify the main keywords, and those keywords are used for topic modeling.

3. Proposed Method

Figure 1: Data flow of proposed model

4. Data Preprocessing

For text summarization, the data were collected from online news portals and comprise more than 12,000 documents. Each summary consists of three parts: a beginning, a body and an ending. Before summarization, the text document needs to be preprocessed. 
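The preprocessing pipeline of this chapter (sentence segmentation on the Bengali danda and stop-word removal, detailed in the subsections below) can be sketched in Python. This is an illustrative sketch only: the tiny stop-word list is a placeholder, not the NLTK resources the paper relies on.

```python
# Illustrative preprocessing sketch: sentence segmentation on the Bengali
# danda (।) and stop-word removal. The stop-word list is a tiny placeholder.
import re

STOP_WORDS = {"আর", "এর", "নিয়ে", "মত", "করে"}  # placeholder list

def split_sentences(text):
    # Split on the danda and drop empty fragments.
    return [s.strip() for s in re.split("।", text) if s.strip()]

def tokenize(sentence):
    # Bengali words are space-separated, as the paper notes.
    return sentence.split()

def remove_stop_words(words):
    return [w for w in words if w not in STOP_WORDS]

text = "নদী আর নদী পাড়ের মানুষের জীবন। গৌতম নিজের মত করে।"
sentences = split_sentences(text)
tokens = [remove_stop_words(tokenize(s)) for s in sentences]
```

Chapter segmentation and stemming would precede and follow these steps in the full pipeline; the stemmer cited later in this thesis ([23], rafi-kamal/Bangla-Stemmer) could fill the final step.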
Preprocessing therefore consists of chapter segmentation, sentence segmentation and word segmentation, followed by stop-word removal and finally stemming.

4.1 Chapter segmentation: Basically, a text document consists of many chapters, and each chapter depends on the others. First the text needs to be separated into chapters, which is why chapter segmentation is used to divide the whole text. Chapter segmentation is the process of dividing the text into meaningful units; from each unit, a different meaning can be extracted to summarize the data.

4.2 Sentence segmentation: From a large collection of data it is difficult to make a meaningful summary. In a text document, each sentence conveys a meaning. Sentence segmentation splits the text into sentences, after which the whole text is exposed as a set of sentences. We use the NLTK toolkit in our work to separate the sentences from the given text, e.g. S = মানুষ হিসেবে সবারই কিছু ইতিবাচক দিক রয়েছে। আবার বিপরীত মেরুতেই রয়েছে বিভিন্ন দোষত্রুটি। নিজেরাই অনেক সময় ভুলগুলো নিয়ে সচেতন থাকি না।

Table 1: Sentences separated from the document
Sentence Number  Sentence
1  মানুষ হিসেবে সবারই কিছু ইতিবাচক দিক রয়েছে
2  আবার বিপরীত মেরুতেই রয়েছে বিভিন্ন দোষত্রুটি
3  নিজেরাই অনেক সময় ভুলগুলো নিয়ে সচেতন থাকি না

4.3 Word segmentation: Word segmentation refers to extracting a</s>
<s>sentence as a set of independent words. Generally, in Bengali and some other languages, a space is the separator between one word and the next.

4.4 Remove stop words: Stop words are a set of commonly used words in a language which have no actual meaning: they help form a sentence but carry no tangible meaning of their own, so we need to remove them from the text. To remove stop words in our paper we use the NLTK tools.

4.5 Stemming: In this step the cleaned data are collected and stored in a document. The data are now ready for analysis and further processing.

5. Calculate Sentence Score

Sentence segmentation only separates the sentences and produces a set of sentences; it gives no clear idea of the value of the words inside each sentence. To score each sentence, we count how many words it contains and use that length as its score, then sort the sentences from high score to low. E.g., we have a set of sentences S = {নদী আর নদী পাড়ের মানুষের জীবন জীবিকা নিয়ে এর আগে গৌতম ঘোষের পদ্মা নদীর মাঝি দেখেছিলাম। মানিকের উপন্যাসের মত করে নয়, গৌতম নিজের মত করে পদ্মা পাড়ের জেলেদের জীবনের বাস্তবতাকে ফুটিয়ে তুলেছিলেন}

Table 2: Sentence scoring
Sentence Number  Sentence  Length
1  নদী আর নদী পাড়ের মানুষের জীবন জীবিকা নিয়ে এর আগে গৌতম ঘোষের পদ্মা নদীর মাঝি দেখেছিলাম  16
2  গৌতম নিজের মত করে পদ্মা পাড়ের জেলেদের জীবনের বাস্তবতাকে ফুটিয়ে তুলেছিলেন  11

6. Calculate Word Score

Calculating the score of each word is most important for finding the most frequent words, because after scoring it becomes visible which words are used most, and we take the group of words that score above average. If a word is used across the sentences ten times, its score is 10.

7. Topic Modeling

Latent Dirichlet Allocation (LDA) is one of the most popular topic models and follows the data preprocessing. It helps to find the topic words related to a document. LDA obtains a list of feature words for each topic. 
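The sentence scoring of Section 5 and the word scoring of Section 6, which the topic-modeling stage builds on, can be sketched together. The two example sentences are taken from Table 2; scoring a word by its raw occurrence count follows the rule above that a word used ten times scores 10.

```python
from collections import Counter

# Two example sentence fragments (from Table 2), already preprocessed.
sentences = [
    "নদী আর নদী পাড়ের মানুষের জীবন জীবিকা",
    "গৌতম ঘোষের পদ্মা নদীর মাঝি",
]

# Section 5: a sentence's score is its length in words.
sentence_scores = [len(s.split()) for s in sentences]

# Section 6: a word's score is how many times it occurs overall,
# e.g. a word used ten times gets score 10.
word_scores = Counter(w for s in sentences for w in s.split())

# Rank sentences from high score to low, keeping their indices.
ranked = sorted(range(len(sentences)),
                key=lambda i: sentence_scores[i], reverse=True)
```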
For example, with LDA, if a collection of words mostly contains words such as "sports" and "play", the document can be considered sports news. Consider the following equations:

X = … (1)
Count … (2)

Here, X is the total value of valid words; the remaining terms are the total words, total prepositions, total nouns, total articles and to-be verbs. After calculating the total value of valid words, we find the probability of each word with the following equations:

… (3)
… (4)
… (5)

If the probability of one word is bigger than the others', that word is the most probable related topic; in other words, the topic will be the word with the biggest probability.

8. Filtering/Smoothing

In this stage we find the score for the sentences and the relations between the sentences and the words. For that we propose a method. As before we explained</s>
<s>that sentence length is the score for the sentence. This time sentence score will be the number of valid words were used in the sentence. To do that it is necessary to remove determinate, nouns, articles etc. Therefore, a set of clear data will be stored in the sentence. Now it is time to find out the expected topic of the document. We can write the process as,S1 = [w1, w2, w3……...wn] ……………………. (6)S2 = [w1, w2, w3……...wn] ……………………. (7)For equation 6 and 7 if process starts, then If S1[i] = S2[i]Match foundElse S2[i++]Go to Start9. Document Summary Suppose, the word ‘Bangladesh’ got the highest value and for this word we got 7 sentences. This time those sentences can be compared with each other using a two-dimensional array. This process will go on until next five top words to ensure maximum accuracy. After processing, most frequent matching will be taken as the best value and corresponding word will be the topic of the document. 10. Final OutcomeIn this process it is possible to find out best possible output topic for a document. This can be used for any language as well as Bangla. Using Bangla language corpus data processing and other things can be done much more efficient way. Conclusion In this era of technology data bought us new opportunity as well as new complexity. Handling new data require new method, sometimes new technology. Reading files one by one to find out topic of the document is one of the toughest task now a day. If a simple system can solve this why should not we use this to proper use our brain and time in other important tasks. Topic modeling is a very important sector of data science and requires very large amount of research work. This system can be a simple but efficient method to use topic modeling for different languages. Acknowledgement We would like to thank DIU - NLP and Machine Learning Research LAB for all their support and help. 
Any error in this research paper is our own and should not tarnish the reputations of these esteemed persons.

References

1. Mahak Gambhir, Vishal Gupta. Recent automatic text summarization techniques: a survey. Artificial Intelligence Review, 47(1), pp. 1-66, January 2017. doi:10.1007/s10462-016-9475-9
2. Rosen-Zvi, M., Griffiths, T., Steyvers, M., and Smyth, P. (2004). The author-topic model for authors and documents. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 487–494.
3. Tsai, F. S. (2011). A tag-topic model for blog mining. Expert Systems with Applications, 38(5):5330–5335.
4. Liu, Y., Niculescu-Mizil, A., and Gryc, W. (2009). Topic-link LDA: joint models of topic and author community. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 665–672.
5. Geetanjali Rakshit, Anupam Ghosh, Pushpak Bhattacharyya, Gholamreza Haffari. Automated Analysis of Bangla Poetry for Classification and Poet Identification. IITB-Monash Research Academy, India; IIT Bombay, India; Monash University, Australia.
6. Amitava Das and Sivaji Bandyopadhyay. Topic-Based Bengali Opinion Summarization.
7. Jiang, H., Zhou, R., Zhang, L., Zhang, Y.: A Topic Model Based on Poisson Decomposition.</s>
<s>In: Proceedings of the 2017 ACM Conference on Information and Knowledge Management, pp. 1489-1498, Singapore, November 2017.
8. Ruohonen, J.: Classifying Web Exploits with Topic Modeling. In: 28th International Workshop on Database and Expert Systems Applications, doi:10.1109/DEXA.2017.35, Lyon, France (2017).
9. Karami, A., Gangopadhyay, A., Zhou, B., Kharrazi, H.: Fuzzy Approach Topic Modeling for Health and Medical Corpora. In: International Journal of Fuzzy Systems (2017).
10. Zhai, C.: Probabilistic Topic Models for Text Data Retrieval and Analysis. In: 40th International ACM SIGIR Conference, pp. 1399-1401, Shinjuku, Tokyo, Japan (2017).</s>
|
<s>A Novel Training Based Concatenative Bangla Speech Synthesizer Model
Firoz Mahmud, MD. Abdullah-al-MAMUN, Mumu Aktar, Shyla Afroge
Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi-6204, Bangladesh. fmahmud.ruet@gmail.com
Abstract— In the modern era of information technology, information is exchanged in many ways to make human life easier, and speech is the primary mode of communication among human beings. A TTS (Text-to-Speech) system converts input text to speech and is a very popular application for computer users. Although speech synthesis technologies are available for English, French, Chinese and many other languages, they are scarce for the Bengali language. This paper presents the implementation of a training based concatenative Bangla speech synthesizer system and its performance. In a concatenative speech synthesizer, synthetic utterances are built by concatenating speech units selected from a database recorded during the training session. "Training based" means that any person can train his/her voice, which is stored in the database; the next time that person inputs a text to be converted to speech, it is read in his/her own trained voice, so this process is known as independent voice. To train a voice, a set of Bengali keywords is stored in the database as segmented audio files. Finally, the performance of this Bangla speech synthesizer implemented with the concatenative synthesis technology is analyzed; listeners identified the synthesized sentences with 85% accuracy.
Keywords— TTS; Training Based Synthesizer; Bangla Speech Synthesizer; Bangla keyword set; Concatenative Synthesis.
I. INTRODUCTION
The most powerful and common method of human communication is the oral mode. In our daily life we communicate with each other via speech.
Nowadays the computer is a vital part of our life, so it is natural for people to expect to be able to carry out spoken dialogue with computers. This involves the integration of speech technology and language technology. A text-to-speech synthesizer is now an important part of information technology because it integrates language and speech for human-computer interaction. Creation of synthetic voice from text is usually referred to by the general term "text-to-speech", though it requires a wide range and variety of procedures. Speech synthesis is the artificial reproduction of natural speech: spoken texts are generated by a computer, and rather than being played from a previously recorded body of texts, each sentence is individually generated [2]. Speech synthesis is also known as Text-to-Speech (TTS). A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations such as phonetic transcriptions into speech [3]. Speech synthesis systems can differ in the size of the stored speech units; alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output [4]. The block diagram of a TTS system is given in Figure 1. Various TTS (Text-to-Speech) systems have already been built for different languages like English, Arabic, French, Turkish, and German and so</s>
<s>on [1, 2, 7, 8, 9, and 11]. Although TTS systems have been introduced efficiently for many languages, the Bangla TTS field is not so rich.
Figure 1: A general speech synthesizer model
A large database is needed to build a TTS system, which makes building one very difficult. In the past an attempt was made by C-DAC, Kolkata, which developed a complete Bangla TTS system named Bangla Vaani [10]. Very recently, CRBLP of BRAC University released another Bangla TTS, Katha [12], built under the Festival framework using unit selection; a complete system is shown in [12]. In this paper, a Bangla speech synthesizer system is implemented with concatenative synthesizer technology: a system that converts input text to speech. The input text may contain characters that need to be converted to normal text. In concatenative speech synthesis, a set of recorded speech units is selected from a database and concatenated to create synthetic utterances [8]. This database contains voice prerecorded according to the keywords.
II. CONCATENATIVE SYNTHESIZER OVERVIEW
Several synthesizer technologies exist for Text-to-Speech systems, such as concatenative synthesis, formant synthesis, HMM (Hidden Markov Model) based synthesis and sine wave synthesis [1]. Formant synthesis uses fundamental frequency, voicing and noise levels instead of human speech samples to create a synthetic speech waveform, whereas concatenative synthesis uses segments of recorded human speech [2]. HMM-based synthesis is a synthesis method based on hidden Markov models: the frequency spectrum, fundamental frequency and duration of speech are modeled simultaneously by HMMs. An overview of HMM based synthesis is shown in [13-14].
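The concatenative step itself, selecting recorded unit files and appending their samples, can be sketched as follows. This is only an illustration, not the authors' implementation, and it assumes every unit WAV file shares the same sample rate, sample width, and channel count:

```python
import wave

def concatenate_units(unit_paths, out_path):
    """Join recorded speech-unit WAV files into one synthetic utterance.

    Assumes all units were recorded with identical parameters
    (sample rate, sample width, channel count)."""
    frames = []
    params = None
    for path in unit_paths:
        with wave.open(path, "rb") as unit:
            if params is None:
                params = unit.getparams()  # take the format from the first unit
            frames.append(unit.readframes(unit.getnframes()))
    with wave.open(out_path, "wb") as out:
        out.setparams(params)  # frame count is patched automatically on close
        for chunk in frames:
            out.writeframes(chunk)
```

In a real unit-selection system, the join points would additionally be smoothed (e.g., cross-faded) to reduce audible discontinuities at the unit borders.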
Sine wave synthesis is a technique for synthesizing speech by replacing the formants (main bands of energy) with pure tone whistles. Concatenative synthesis is based on the concatenation of segments of recorded speech and uses large databases of recorded speech. During creation of the database, each recorded utterance is segmented into some or all of the following: individual phones, diphones, half-phones, syllables, morphemes, words and phrases, which are here known as keywords. Synthesized speech can then be created by concatenating a number of recorded voice segments (according to the keywords) that are stored in a database as audio files.
A. Keyword
A keyword is a basic unit of a word: a unit consisting of uninterrupted sound that can be used to make up words. A keyword is thus a unit of organization for a sequence of speech sounds; in English it is known as a "syllable". For example, the word " " consists of two keywords, one is " " and the other is " ".
B. Classification of Bangla keyword
According to the number of letters ( ড ) combined, Bangla keywords can be classified as shown in Figure 2.
Figure 2: Classification of Bangla keyword
1) Independent keyword: If a keyword is constructed from only one letter then it is known as an independent keyword. There are two kinds of independent keywords.
a) Vowel( ড):</s>
<s>A speech sound that is produced by a comparatively open configuration of the vocal tract, with vibration of the vocal cords but without audible friction, and that forms the nucleus of a syllable in the sound system of a language. For example, অ, আ, ই, ঈ, উ, ঊ, ঋ, এ, ঐ, , ।
b) Consonant( ন ড): A basic speech sound in which the breath is at least partly obstructed and which can be combined with a vowel to form a syllable. For example, ও, ঔ, ক, খ, গ, ঘ, ঙ, চ, দ, ছ, জ, and so on.
2) Dependent keyword: If a keyword is constructed from one or more consonants combined with a kar (ও ) (the smallest form of a vowel, e.g. and the like) or a fola ( ) (the smallest form of a consonant, e.g. ; here ও is the fola), then it is known as a dependent keyword. Dependent keywords can be classified as follows.
a) Modifier Character: A keyword constructed from one consonant with a kar (ও ) is known as a modifier character. For example, ও , ঠ, ঢ , , ন , , ক , জ and the like.
b) Compound Character: A keyword which is the combination of two or more consonants is known as a compound character. For example, , , , , , , , , and the like.
c) Complex Character: If a keyword is the combination of both a modifier character and a compound character then it is called a complex character. For example, and so on.
C. Bangla keyword set
There are many keywords in the Bangla language. From an analysis of Bangla literature, we have found about 1200 keywords. To detect the keywords we used the following four Bengali literary works: দন (Riktar Badon) by ও চ নচ ই (Kazi Nazrul Islam); দ ক ন ন (Durgeshnandini) by ঘ ঘ (Bankim Chandra Chattopadhyay); (Shesh Prashna) by ঘ ঘ (Sarat Chandra Chattopadhyay); খন দ ও (Meghnad Badh Kabya) by ই ও দন দ (Michael Madhusudan Dutta). The following sections show the sample keyword sets for the Bangla language.
1) Vowel set: There are 11 characters in the vowel set. They are অ আ ই ঈ উ ঊ ঋ এ ঐ
2) Consonant set: There are 35 characters in the consonant set.
They are ও ঔ ক খ গ ঘ ঙ চ ছ জ ঝ ঞ ট ঠ ড ঢ ণ দ ন ভ য হ ড় ঢ়
3) Numeric set: There are 10 characters in the numeric set. They are ০ ১ ২ ৩ ৪ ৫ ৬ ৭ ৮ ৯.
4) Modifier Character set: The Bangla language has a huge collection of modifier characters. The collection of all modifier characters is called the modifier character set; a sample is shown in Table 1.
Table 1: Modifier Character set sample
For letter | Possible character set
ক | ও ও ও ও ও ও ও ও ও ও ও
খ | ঔ ঔ ঔ ঔ ঔ ঔ ঔ ঔ ঔ ঔ ঔ ঔ
গ | ক ক</s>
<s>ক ক ক ক ক ক ক ক ক ক ঘ খ খ খ খ খ খ খ খ খ খ খ খ খ ঙ None চ ঘ ঘ ঘ ঘ ঘ ঘ ঘ ঘ ঘ ঘ ঘ ছ ঙ ঙ ঙ ঙ ঙ ঙ ঙ ঙ ঙ ঙ জ চ চ চ চ চ চ চ চ চ চ চ চ ঝ ছ ছ ছ ছ ছ ছ ছ ছ ছ ছ ঞ জ জ জ য য য য য য য য য য য য য হ হ হ হ হ হ হ হ হ হ হ হ ড় ড় ড় ড় ড় ড় ড় ড় ড় ঢ় ঢ় ঢ় ঢ় ঢ় ঢ় ঢ় ঢ়
5) Compound Character set: Like the modifier characters, there is a huge collection of compound characters in the Bangla language. The collection of all compound characters is called the compound character set; a sample is shown in Table 2.
Table 2: Compound character set sample
For letter | Possible character set
খ ঔ ঘ খ ছ ঙ জ চ চ চ ঝ ছ ম ও ক য None র None ঔ খ গ ঘ চ ছ ঞ ড ন ল ও ঔ ক খ গ ছ ঝ ঞ ট দ হ ঢ ঘ ক গ চ ছ জ ঞ ট ঠ ড দ ড় None ঢ় None None
6) Punctuation character set: Punctuation marks are symbols that indicate the structure and organization of written language, as well as the intonation and pauses to be observed when reading aloud. Every symbol has a predefined meaning; if a punctuation character is used in the wrong place, the meaning of the sentence may change significantly. The collection of all punctuation characters is called the punctuation character set, shown in Table 3.
Table 3: Punctuation character set sample
। | Full-stop | Ends the sentence
।। | Twice-stop | Repeats the sentence twice
, | Comma | Short stop within the sentence
; | Semicolon | Short stop within the sentence
:- | Colon-dash | Introduces an example of a topic
? | Question mark | Marks a question in a sentence
! | Exclamation mark | Marks an exclamation in a sentence
- | Hyphen | Combines two words or sentences
/ | Forward slash | Denotes an "or" relation between two words
" | Quotation start | Starts a quotation
" | Quotation end | Ends the quotation
( | First bracket start | Opens the first bracket
) | First bracket close | Closes the first bracket
D. Training Keyword modeling
By modeling a sequence of sentences, keywords can be separated. If each word of a sentence consists of only two keywords, separation is easy; therefore, here each word of a sentence consists of only two keywords. For example, take "আ ভ ঢ ঔ ই"। This sentence has three words: আ , ভ ঢ and ঔ ই. From</s>
<s>these words, the keywords can easily be detected. The word আ consists of the two keywords আ and ; similarly, the words ভ ঢ and ঔ ই consist of ভ , ঢ and ঔ , ই respectively. So to model the training keywords, we can create a passage in which each word consists of only two keywords, and then separate each keyword from the words of the passage.
E. Recording voice
Any person who wants to create his/her voice for Text-to-Speech (TTS) conversion first needs to log in and then read the passage (here, the passage contains the whole Bangla keyword set); the corresponding audio file is stored in the database. Note that the words of the passage are shown one by one, so the starting and ending positions of the audio for each word can easily be detected.
F. Segmentation of recording voice
From each word of the corresponding audio file, keywords can be separated by signal ratio analysis. For example, the word "আ " has two keywords, আ and . Here, in the audio file of "আ ", the audio signal ratio of আ: is 2:1. That means if the total audio file length of আ  is 1000 ms, the first 666.67 ms of the audio signal is the keyword "আ" and the next 333.33 ms is the keyword " ". An audio file for each keyword has been created from the segmented portion of each word. Figure 3 shows the original speech signal for "আ " and Figure 4 shows the segmentation of the original signal.
G. Creating database
The database has been created by collecting the audio files of all keywords from the segmented recorded voice: it is a collection of audio files for the whole Bangla keyword set. The database is used for retrieving the keyword audio file corresponding to an input keyword string. For the .NET framework, a resource folder or any database server (e.g., SQL Server) can be used to store the keyword audio files.
III. STEPS FOR SYNTHESIZER MODEL
A text-to-speech system (or "engine") is composed of two parts [5]: a front-end and a back-end. The front-end converts the input text into the equivalent keyword set.
This process is known as text normalization, pre-processing, or tokenization. The front-end then assigns phonetic transcriptions to each word and divides and marks the text into prosodic units such as phrases, clauses, and sentences. The process of assigning phonetic transcriptions to words is called text-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end, often referred to as the synthesizer, then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of the target prosody (pitch contour, phoneme durations) [6], which is then imposed on the output speech.
Figure 3: Original speech signal for "আ "
Figure 4: Splitting the two keywords "আ" and " " from the original signal "আ "
A. Text as input
A Bangla text is input to be converted to speech. For example, input "আ দ দ ন দ ।এই দ আঢন ১৪৭৫৭০</s>
<s>ক ও ঝ । ও ঢও ন ওঢ ন এই দ দ ও দ ঢ ও দঔ য চ ঞ, , ঢ ড ভঢ অ ড আ - এ ন ভ ।" as the text.
B. Text normalization rule
The input string is normalized according to the punctuation marks. First, the text is divided into three parts according to the full-stop (দ ড়). The first part is "আ দ দ ন দ ", the second part is "এই দ আঢন ১৪৭৫৭০ ক ও ঝ " and the third part is " ও ঢও ন ওঢ ন এই দ দ ও দ ঢ ও দঔ য চ ঞ, , ঢ ড ভঢ অ ড আ - এ ন ভ ". The first and second parts contain no further punctuation, but the third part contains some punctuation marks (i.e., a comma (,) and a hyphen (-)). So the third part is divided according to the comma (ও (,)) and hyphen (হ ই ন(-)), which yields another four parts. The first part is: ও ঢও ন ওঢ ন এই দ দ ও দ ঢ ও দঔ য চ ঞ, the second part is: , the third part is: ঢ ড ভঢ অ ড আ , and the last part is: এ ন ভ . The complete text normalization process is shown in Figure 5.
Figure 5: Input text normalization process.
To convert a number, each digit is placed according to its positional degree and then the number is converted to words. For example, ১৪৭৫৭০ is converted to "এও ঢঘ হ চ ঘ ঢ ".
C. Text-to-keyword rule
Speech synthesis technology uses a text-to-keyword rule to determine the pronunciation of a word based on its spelling, often called text-to-phoneme or grapheme-to-phoneme conversion (phoneme is the term used by linguists for the distinctive sounds of a language). The normalized text is split to generate a set of keywords. For example, for the normalized text "আ দ দ ন দ ", the keyword set is generated as follows: আ দ [] দ [] ন [] দ. Here, [] represents the space.
D. Synthesizer
The synthesizer takes keywords as input and returns speech as output. To return the speech, a keyword is taken as input and searched for in the database.
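The front-end/back-end flow up to this point, normalizing the text at punctuation marks and then retrieving each keyword's recorded file, can be sketched as below. The database contents, file paths, and punctuation list are illustrative assumptions, not the authors' code:

```python
import re

# Illustrative keyword-to-audio mapping (cf. the database of Section II.G);
# a real database would cover the full Bangla keyword set.
keyword_db = {
    "আ": "db/aa.wav",
    "মা": "db/maa.wav",
}

# Punctuation handled by the normalization rule; \u0964 is the
# Bangla full-stop "।".
_PUNCT = r"[\u0964,;:\-/?!]"

def normalize(text):
    """Front-end: split the input text into punctuation-free parts."""
    return [part.strip() for part in re.split(_PUNCT, text) if part.strip()]

def synthesize(keywords):
    """Back-end: look up each keyword in the database and return the audio
    files to play in sequence. Unknown keywords are skipped here, although
    a real system would need a fallback (e.g., spelling out the character)."""
    return [keyword_db[kw] for kw in keywords if kw in keyword_db]
```

Playing the returned files one after another, as the text describes, yields the synthetic utterance.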
If a matching audio file for the keyword is found in the database, the keyword's audio file is retrieved and played as output. This is a continuous process in which audio files are played one by one according to the keyword sequence, which is perceived as speech corresponding to the text.
IV. SYNTHESIZER COMPLEXITY ANALYSIS
Several complexities are associated with the concatenative speech synthesizer technology; they are described in the following sections.
A. Ratio problem in segmenting recorded voice
For segmenting the recorded voice, we used the ratio between two keywords, but this process cannot fully detect each keyword's starting and ending points in the audio file. We considered the</s>
<s>ratio 2:1 for segmenting a recorded word. Table 4 shows the segmentation for various keywords, where L is the total length of the audio file, LA is the length of a keyword after segmentation, and LO is the original length of the keyword before segmentation; the error is Error(%) = |LA - LO| / L x 100. Figure 6 shows the error rate for different keywords before and after segmentation.
Table 4: Time variation for different keywords before and after segmentation
Word (ল ) | Total length L (ms) | 2:1 ratio length LA for "আ" (ms) | Original length LO for "আ" (ms) | Error (%)
আ ম | 1120 | 746.67 | 810 | 5.65%
আজ | 960 | 640 | 710 | 7.29%
আর | 1050 | 700 | 670 | 2.85%
আম | 940 | 626.67 | 690 | 6.73%
Figure 6: Error rate for different keywords before and after segmentation.
From Table 4 it is easy to see that a letter's audio length varies across different keywords.
B. Speaker variation problem
If we use only the ratio for segmenting keywords, the segmentation process cannot divide each keyword properly, because the tone of a word differs from person to person. As a result, the output speech can sound a little unnatural. For example, Table 5 shows how the audio file length varies for the same word "আচ". Figure 7 shows the error rate for different speakers for the same word "আচ".
Table 5: Time variation for different speakers
Speaker | Audio file length S (ms) | 2:1 ratio length SA for "আ" (ms) | Original length SO for "আ" (ms) | Error (%) = |SA - SO| / S x 100
Speaker_1 | 1150 | 766.67 | 820 | 4.63%
Speaker_2 | 980 | 653.34 | 690 | 3.74%
Speaker_3 | 1070 | 713.34 | 780 | 6.22%
Speaker_4 | 1020 | 680 | 750 | 6.86%
Figure 7: Error rate for different speakers for the same word "আচ".
C. Confusing letter problem
Some Bangla keyword utterances cannot be detected properly; these are called the confusing letters.
For example, some confusing letters are হ ), ঘ ), ক )। The words ঢ and ঢ are uttered the same way, so the keywords ঢ and ঢ cannot be distinguished.
D. Number to Text Conversion problem
Number conversion is another problem in TTS systems. A TTS system often infers how to expand a number based on surrounding words, numbers and punctuation, and sometimes the system provides a way to specify the context if it is ambiguous.
E. Universal Keyword Set
The consistent evaluation of speech synthesis systems may be difficult because of a lack of universally agreed objective evaluation criteria. Different organizations often use different speech data. The quality of a speech synthesis system also depends to a large degree on the quality of the production technique (which may involve analogue or digital recording) and on the facilities used to replay the speech. Evaluating speech synthesis systems has therefore often been compromised by differences between production techniques and replay facilities.
V. PERFORMANCE
In spite of large improvements, synthesized speech can still sound a little unnatural. Notably, an HMM-based TTS system can even be used to spoof speaker verification systems, owing to its ability</s>
to synthesize speech with arbitrarily given text and a speaker's voice characteristics. The performance of this Bangla speech synthesizer system was measured by listening tests. Single vowels, consonants and digits were fully identified by listeners, so in this case the system provides 100% accuracy in identifying the words correctly. However, when a whole sentence was given as input text for synthesis, the listeners could not fully identify all of the words: it was found that they identified about 85% of the words correctly.

Figure 8: Accuracy comparison to identify words or sentences from input.

VI. CONCLUSION

The concatenative speech synthesis algorithm works on units recorded in different phonetic contexts and has to reduce the audio waveform discontinuities and the mismatches at the unit borders. The prosodic outline of the units is modified to the desired specification given by a prosody-generation block whose input is the text to be spoken by the system. In this work, we developed a Bangla speech synthesizer system for computers and presented the implementation process and performance of this system, which is implemented with a concatenative speech synthesis architecture. Our goal is to develop a Bangla text-to-speech (TTS) application that can efficiently produce almost real-time speech from the input text for human-computer interaction. However, concatenative speech synthesis technology no longer provides the best accuracy for the text-to-speech model, and the Bangla speech training model still has some gaps. So, in future work we will try to improve the accuracy of this Bangla speech synthesizer, again implemented with concatenative speech synthesis technology.
I.J. Information Engineering and Electronic Business, 2019, 2, 1-9
Published Online March 2019 in MECS (http://www.mecs-press.org/)
DOI: 10.5815/ijieeb.2019.02.01
Copyright © 2019 MECS

Text to Speech Synthesis for Bangla Language

Khandaker Mamun Ahmed, Department of Computer Science and Engineering, BRAC University, Dhaka, Bangladesh. Email: mamun.ahmed@bracu.ac.bd
Prianka Mandal, Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh. Email: prianka.swe@diu.edu.bd
B M Mainul Hossain, Institute of Information Technology, University of Dhaka, Dhaka, Bangladesh. Email: mainul@iit.du.ac.bd

Received: 13 September 2018; Accepted: 14 December 2018; Published: 08 March 2019

Abstract—Text-to-speech (TTS) synthesis is a rapidly growing field of research. Speech synthesis systems are applicable to several areas such as robotics, education and embedded systems, and a TTS system can increase the correctness and efficiency of an application. Though Bangla is the seventh most spoken language in the world, TTS systems for Bangla are hard to find in applications because existing systems lack simplicity and lightweight design. Therefore, in this paper, we propose a simple and lightweight TTS system for the Bangla language. We convert Bangla text to romanized text based on the Bangla grapheme set and a set of romanization rules we developed. Besides, an XML-based data representation is developed as a feature of the system. It gives the flexibility to modify the data representation, parse data and create speech based on one's own dialect. Our proposed system is very lightweight, takes little processing time and produces good, understandable speech.

Index Terms—Synthesis, normalization, dialect, diphone, concatenation, tokenization, romanization.

I.
INTRODUCTION

Software systems have become an inevitable part of our daily life, and the usage of software is increasing day by day. With the demand for different kinds of software systems, text-to-speech (TTS) synthesis has come forward. There are hundreds of areas where TTS systems are very important, such as robotics, warning systems, alarm systems, email reading, human-computer interaction and especially assistance for people with visual impairment or dyslexia. Considering the necessity of such systems, many popular technology companies and platforms such as Mattel, SAM, Atari, Apple, Microsoft Windows, Amiga OS and the Texas Instruments TI-99/4A have offered speech synthesis as a built-in capability [23,24]. A TTS system converts natural language text into speech, allowing a computer system to read text aloud. A speech synthesizer converts written text to a phonemic representation and then converts the phonemic representation to waveforms that can be output as sound. There are several ways to create synthesized speech; among them, concatenative synthesis and formant synthesis are very popular. Concatenative synthesis is based on concatenating pre-recorded speech of phonemes, diphones, words or phrases, and it produces the most natural-sounding synthesized speech because of its use of pre-recorded data. Formant synthesis makes the timbre of a voice or instrument consistent over a wide range of frequencies and generates artificial, robotic-sounding speech. In this paper, we use a concatenative synthesis technique to generate natural-sounding speech. Bangla is one of the most important Indo-Aryan languages, which
is the seventh most popular language in the world, spoken by a population that now exceeds 250 million [16]. Bangla is the primary spoken language in Bangladesh and the second most spoken language in India [4]. Several studies have been conducted on Bangla speech synthesis, but they are not enough to build a complete TTS system. Sometimes a large lexicon is necessary to design a TTS system, which needs long processing time [6]. The Bangla language has 50 letters while the English language has 26, almost half as many. With this in mind, we translated Bangla text to English to reduce the processing time and to be able to use the existing English phone set to generate Bangla speech. There are several text-to-speech synthesis engines available nowadays. Among them, Festival is an open-source, extremely flexible concatenative TTS engine which uses diphones or other units to generate synthesized speech [21,25]. It uses a Bangla lexicon to produce Bangla speech [1,6]. Festival is a large system with a slow compilation process and a high runtime memory requirement [2,22]. Flite is another speech synthesizer, which targets size and performance on embedded platforms at the cost of flexibility [7]. Considering the performance issue and flexibility, another speech synthesizer, FreeTTS, was developed based on these two: it uses the algorithms of Flite and the architecture of Festival, and it is found that FreeTTS runs two to three times faster than Flite [7]. This paper presents a Bangla text-to-speech synthesis system which is flexible, needs little processing time and produces good, understandable speech. Besides, we developed an intermediate XML-based data representation feature which helps users create speech based on their own dialect.
This spares users from knowing the technical details of speech synthesis. To the best of our knowledge, ours is the first work to synthesize Bangla speech using an English diphone set, which reduces the processing time for synthesis. The rest of the paper is organized as follows: section II presents several existing works regarding text-to-speech synthesis systems. Section III presents the proposed approach. Section IV discusses the experimental results. Finally, conclusions of this work and suggestions for future work are summarized in section V.

II. BACKGROUND STUDY

Developing a text-to-speech synthesis system is a challenging task. Many stages, such as text normalization, text-to-phoneme conversion, prosodic and emotional content detection, and speech synthesis, need to be accomplished to develop a complete TTS system. Plenty of research works have already been proposed in speech synthesis for different languages. Some early researchers tried to build machines to emulate human speech long before the invention of electronic signal processing. Speech synthesis first came to light in 1779, when models of the human vocal tract were built that could produce the five vowel sounds (in International Phonetic Alphabet notation: [a], [e], [i], [o] and [u]) [8].
A pitch-synchronous waveform processing technique for text-to-speech synthesis using diphones was presented in [19]. In that paper, several algorithms were reviewed in a common framework to improve the voice quality of a text-to-speech synthesis system; the framework was based on an acoustical-unit concatenation technique [19,20]. A German text-to-speech synthesis system, MARY, was proposed by Schröder and Trouvain [17]. The system's main features are a modular design and an XML-based internal data representation, which allowed the user to access and modify the intermediate processing steps without having a technical understanding of the system. While research in text-to-speech synthesis for Western languages has matured, work for the Bangla language remains scarce, e.g., [1,10,11,14]. The work reported by F. Alam et al. developed a speech synthesizer for the Bangla language [1,6]. This system was developed using a diphone concatenation approach. It needs a lexicon with pronunciations to produce speech; the lexicon contains ninety-three thousand entries [6]. The proposed system creates voice data for Festival and additionally extends Festival, using its embedded Scheme scripting interface, to incorporate Bangla language support. It translates Bangla Unicode text to ASCII according to a Bangla phone set. However, there is no description of how the transliteration process works. Moreover, there is no description of the letter-to-sound (LTS) rules developed for words that are absent from the lexicon. A concatenative speech synthesis system based on the Epoch Synchronous Non OverLap Add (ESNOLA) technique for Bangla text-to-speech synthesis is discussed in [10,11]. The ESNOLA algorithm is developed for concatenation and regeneration as well as for pitch and duration modification.
The preprocessing module creates a partnames database from pre-recorded natural speech signals, the text analysis module accepts input text and generates a phoneme string and stress markers, and the synthesizer module generates speech by combining slices of pre-recorded speech. A PDF text-to-speech conversion process is discussed in [9], while others have tried to analyze sentiment in Bangla text [3]. PDF represents different types of data as objects, such as text objects, image objects and multimedia objects [9]. The PDF-to-Unicode-text conversion process extracts texts from PDF objects, and the Unicode text-to-speech conversion process produces speech. Every language has standard and non-standard words. To generate speech, all the non-standard words should be converted to their correct pronounceable form. There are several ways to identify and normalize non-standard words. Some researchers have identified several semiotic classes for text normalization [12,13]: regular expressions were written in .jflex format to recognize each semiotic class, and a set of rules was used for tokenization and verbalization. Another approach used a decision tree and decision list for disambiguation [14]. Though some work has been done in this domain, there are still problems that need to be solved to get good-quality sound.

III. METHODOLOGY

To synthesize speech from text, we propose a text-to-speech synthesis system for the Bangla language. The overall architecture of the proposed system is given in Figure 1, where we have normalized, tokenized, romanized and synthesized
the input text.

Fig.1. Architecture of Bangla text to speech synthesis system

IV. TEXT NORMALIZATION

A text document contains not only full words but also various other language units such as numbers, dates, symbols and currency. While speech can be synthesized from full words directly, all the other language units must first be consistently expanded into full words before they are synthesized. This internal conversion process is called text normalization. Table 1 contains the list of language units along with their expanded formats. In the following subsections, the normalization process of some language units is discussed.

A. Number normalization

A number is a mathematical notation to count, measure or label. The Bangla numeral system has ten digits, ০, ১, ২, ৩, ৪, ৫, ৬, ৭, ৮, ৯, like the Hindu-Arabic numeral system [15]. There are one hundred numerals, from zero (0) to ninety-nine (99) (table 2). For numbers above 99 there are five main systems for naming numbers in Bangla (table 3). If a text has only digits (০-৯), or digits separated by commas (","), it is recognized as a number. After identifying the language unit as a number, the following procedure converts it to its pronounceable form.

1. First, the Bangla number is converted to an English number. This process works by replacing each Bangla digit with the corresponding English digit: ০->0, ১->1, ২->2, ৩->3, ৪->4, ৫->5, ৬->6, ৭->7, ৮->8, ৯->9.
2. After converting the number from Bangla to English, it is checked whether the number is less than "১০০" (100). If the number is less than all number units, its corresponding pronounceable form is taken from the Bangla numerals (table 2).
But if the number is greater than or equal to a number unit (checked in descending order), the number is divided by that unit and the quotient and remainder are calculated.
3. The calculated quotient and remainder are checked for zero. If the quotient or remainder is not zero, it is passed again to step 2, and the unit's Bangla pronounceable form is added to the pronounceable text.

For example, "১০০৩৩২" is a Bangla number, converted to the English number 100332. 100332 is greater than the number unit 100000, so 100332 is divided by 100000, giving remainder 332 and quotient 1. The quotient is not zero and is less than 100; therefore the pronounceable form so far is "এক লক্ষ" (one hundred thousand). However, the remainder is greater than the unit 100, so the remainder is divided by 100 and again the quotient 3 and remainder 32 are calculated. Both are now less than 100, so the pronounceable form of the quotient is "তিন শত" (three hundred) and of the remainder "বত্রিশ" (thirty-two). Finally, "১০০৩৩২" will be pronounced "এক লক্ষ তিন শত
বত্রিশ".

B. Date normalization

According to the national and official calendar of Bangladesh, the date format is "দদ-মম-বববব" (dd-mm-yyyy). A text is identified as a date unit if it contains a one- or two-digit number denoting a day ranged from one to thirty-one, a separator, a one- or two-digit number denoting a month ranged from one to twelve with the same separator, and a two- or four-digit number denoting a year. People also use some other date formats like "day number - month name - year", for example "২ জুলাই ২০১৭" (2 July 2017). These dates can be identified if the text contains a one- or two-digit number, a separator, a text denoting the month and a four-digit number denoting the year, sequentially.

Table 1. Language units with their expanded format

Cardinal number: ১১৩০ → একহাজার একশত ত্রিশ
Fractional number: ১০৫.০২ → একশত পাঁচ দশমিক শূন্য শূন্য দুই
Ordinal number: ১ম, ২য় → প্রথম, দ্বিতীয়
Date: ০২/০৫/২০১৬ or ২ মে ২০১৬ → দুই মে দুই হাজার ষোল
Phone number: +৮৮০১৮১৩৩৭৫১৮২ → ধনাত্মক আট আট শূন্য এক আট এক তিন তিন সাত পাঁচ এক আট দুই; +৮৮০-৯৬৩৯৫৩৭১৯০ → ধনাত্মক আট আট শূন্য নয় ছয় তিন নয় পাঁচ তিন সাত এক নয় শূন্য
Range: ১০-১২ → দশ থেকে বার
Roman numerals: I, II, III, IV, V → প্রথম, দ্বিতীয়, তৃতীয়, চতুর্থ, পঞ্চম, …
Time: ১২:৩০:১৫ → বারটা ত্রিশ মিনিট পনের সেকেন্ড
Unit and measurement: °, ', ", % → ডিগ্রি, মিনিট, সেকেন্ড, শতাংশ

There are four date component separators, which are the following:

1. "/" - stroke (slash)
2. "." - dots or full stops (periods)
3. "-" - hyphens or dashes
4. " " - spaces

After identifying a date, we converted it to its pronounceable format. We separated the date unit into day, month and year. The day is expanded using the number normalization algorithm.
The Bangla calendar has twelve months in a year, like the English calendar system. If the month is a text like "জুলাই" (July), it remains as it is. If the month is a number, we replace the number with the corresponding month text. For a Bangla year there are two formats to pronounce the year. If the year is less than one thousand or is not a whole number of thousands, such as 1YXX, 2YXX … (here X represents any digit and Y represents a digit not equal to zero), it is pronounced by grouping.

Table 2. Bangla numerals

০ শূন্য, ১ এক, ২ দুই, ৩ তিন, ৪ চার, ৫ পাঁচ, ৬ ছয়, ৭ সাত, ৮ আট, ৯ নয়, ১০ দশ, ১১ এগারো, ১২ বার, ১৩ তের, ১৪ চৌদ্দ, ১৫ পনের, ১৬ ষোল, ১৭ সতের, ১৮ আঠারো, ১৯ উনিশ, ২০ বিশ, ২১ একুশ, …, ৯৪ চুরানব্বই, ৯৫ পঁচানব্বই, ৯৬ ছিয়ানব্বই, ৯৭ সাতানব্বই, ৯৮ আটানব্বই, ৯৯ নিরানব্বই

Table 3. Units for naming Bangla numbers

১০০ (10^2) - one hundred - এক শত
১০০০ (10^3) - one thousand - এক হাজার
১০০০০০ (10^5) - hundred thousand - এক লক্ষ
১০০০০০০০ (10^7) - ten million - এক কোটি

Table 4. Date normalization

Regular text: ১ জুলাই ২০১৬ তারিখে নয়জন হামলাকারী ঢাকার হলি আর্টিজান বেকারিতে গুলিবর্ষণ করে। | Identified date: ১ জুলাই ২০১৬ | Normalized date: এক জুলাই দুই হাজার ষোল
Regular text: ২/৩/২০১৭
তারিখে বাংলাদেশ ক্রিকেট দল শ্রীলঙ্কা সফর করে। | Identified date: ২/৩/২০১৭ | Normalized date: দুই মার্চ দুই হাজার সতের

The last two digits make one group and the remaining digits make another, and the word "শো" (sho) is attached to the group of leading digits. For example, 1971 will be pronounced as "উনিশশো একাত্তর" (unishsho ekattor). And if the year is like 10XX or 20XX, it is converted using the number conversion algorithm. For instance, 2012 will be pronounced as "দুই হাজার বার" (two thousand twelve).

C. Currency normalization

The Bangladeshi taka (Bangla: "টাকা") is the currency of the People's Republic of Bangladesh and its sign is "৳". There are two ways to represent currency or an amount of money. First, the currency sign "৳" comes before a number, as in "৳১০০". Second, the word "টাকা" (taka) comes after a number, as in "১০০ টাকা" (100 taka). The currency unit needs to be normalized to the corresponding pronounceable form in both situations. The correct pronounceable form of "৳১০০" or "১০০ টাকা" is "এক শত টাকা". To recognize currency in text, we created two currency recognition formats:

"৳ - space - N" or "৳ - N"
"N - space - টাকা"

Here, N refers to a number. The number may contain a comma (",") to separate special units (শত, হাজার, লক্ষ, কোটি). We used the same algorithm to normalize currency that is used to normalize numbers. After recognizing a text as a currency unit, we separate the word "টাকা" or the currency sign "৳" and get the number. Then the number normalization algorithm generates the pronounceable form of that number. Finally, the word "টাকা" is added after the pronounceable text.

D. Phone number normalization

A telephone number is a sequence of digits assigned to a fixed-line telephone subscriber station connected to a telephone line, or to a wireless electronic telephony device such as a radio telephone or a mobile telephone, or to other devices for data transmission via the public switched telephone network (PSTN) or other private networks.
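Stepping back to the date rules above, the year-grouping logic can be sketched in Python. This is a minimal illustration, not the paper's implementation: the digit-to-word table is a tiny romanized stand-in for the full numeral table (Table 2), and all function names are hypothetical.

```python
# Sketch of the year-pronunciation rule described above (romanized output).
# NUMERALS is a tiny illustrative subset of the paper's 0-99 numeral table.
NUMERALS = {0: "shunno", 2: "dui", 12: "baro", 19: "unish", 71: "ekattor"}

def number_to_words(n):
    # Stand-in for the full number-normalization algorithm of subsection A.
    if n in NUMERALS:
        return NUMERALS[n]
    if n < 100:
        return f"<{n}>"          # placeholder for table entries omitted here
    q, r = divmod(n, 1000)
    words = number_to_words(q) + " hajar"
    if r:
        words += " " + number_to_words(r)
    return words

def year_to_words(year):
    century, rest = divmod(year, 100)
    if rest != 0 and century % 10 != 0:   # e.g. 19XX or 2YXX with Y != 0
        # Grouping rule: leading group + "sho" + last-two-digit group.
        return f"{number_to_words(century)}sho {number_to_words(rest)}"
    return number_to_words(year)          # e.g. 10XX, 20XX -> plain number

print(year_to_words(1971))   # unishsho ekattor
print(year_to_words(2012))   # dui hajar baro
```

The split into a grouped and a non-grouped case mirrors the two year formats described in the text.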
The subscriber phone number in Bangladesh is a unique 11-digit number. The country calling code for Bangladesh is +880. The typical format for a mobile phone number is "+880-1X-NNNN-NNNN" and the typical format for a telephone number is "+880-96XX-NNNNNN". For mobile and telephone numbers, +880 is the country code, X is the operator code and N is the subscriber number. When dialing a Bangladesh number from inside Bangladesh, the format is:

0 - operator code (X) - subscriber number (N), or
96 - operator code (X) - subscriber number (N)
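The number-normalization procedure of subsection A (steps 1-3), which also underlies the currency and phone-number expansion above, can be sketched as follows. The romanized output strings and the truncated ONES table are illustrative stand-ins for the paper's Bangla numeral tables; the names are not from the paper.

```python
# Sketch of the number-normalization steps above (romanized output).
BN_DIGITS = str.maketrans("০১২৩৪৫৬৭৮৯", "0123456789")
ONES = {1: "ek", 3: "tin", 32: "bottrish"}       # tiny subset of Table 2
UNITS = [(10**7, "koti"), (10**5, "lokkho"), (1000, "hajar"), (100, "shoto")]

def normalize_number(text):
    # Step 1: replace Bangla digits with English digits, drop commas.
    n = int(text.translate(BN_DIGITS).replace(",", ""))
    return spell(n)

def spell(n):
    if n < 100:                                   # step 2: direct table lookup
        return ONES.get(n, f"<{n}>")              # placeholder if omitted here
    for unit, name in UNITS:                      # largest unit first
        if n >= unit:
            q, r = divmod(n, unit)                # step 3: split and recurse
            words = f"{spell(q)} {name}"
            if r:
                words += " " + spell(r)
            return words

print(normalize_number("১০০৩৩২"))   # ek lokkho tin shoto bottrish
```

Running the worked example from the text, "১০০৩৩২" splits into 1 x 100000 + 3 x 100 + 32, matching "এক লক্ষ তিন শত বত্রিশ".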
If a text has +880 followed by 1 or 96 and an eight-digit number, it is recognized as a phone number. When dialing inside Bangladesh, the country code is not necessary; in that situation, if a text has 1 or 96 followed by an eight-digit number, it is identified as a phone number. A phone number is simply a sequence of digits, so after identifying a text as a phone number, the digits are replaced with their corresponding pronounceable forms.

V. TOKENIZATION

Tokenization is the process of demarcating and possibly classifying sections of a string of input characters. In tokenization, a given character sequence is chopped into pieces called tokens. A token is an instance of a sequence of characters in a particular document that is grouped together as a useful semantic unit for processing. Tokens are identified based on the specific rules of the lexer. Some methods used to identify tokens include regular expressions, specific sequences of characters termed a flag, specific separating characters called delimiters, and explicit definition by a dictionary. Special characters, including punctuation characters, are commonly used by lexers to identify tokens because of their natural use in writing. Like English, Hindi and other South Asian languages, Bangla also uses whitespace to tokenize a sequence of characters into individual tokens. In this paper, punctuation characters are used to tokenize text into sentences, and the sentences are further tokenized into words by the whitespace character.

Table 5. Bangla alphabets romanization

Vowels (vowel / vowel mark → romanized form): অ / - → o; আ / া → a; ই / ি → i; ঈ / ী → i; উ / ু → u; …
Consonants (ব্যঞ্জনবর্ণ): ক → k; খ → kh; গ → g; ঘ → gh; ঙ → ng; …
Consonant conjuncts (যুক্তবর্ণ): ক্ক → kk; ন্ট → nt; দ্ধ → dh; ক্ষ → kkho; চ্ছ → cch; …

VI. ROMANIZATION

Romanization is the representation of a script in Latin script.
Bangla is a segmental writing system and its graphemes represent phonemes. Bangla script has 11 vowel graphemes, 39 consonant graphemes and more than two hundred consonant conjuncts. We have romanized Bangla script according to the Bangla grapheme set. Table 5 shows the Bangla grapheme sets for vowels, consonants and consonant conjuncts with their corresponding romanized forms, and table 6 shows the romanization process.

Table 6. Bangla word romanization process

আমার = আ + ম + া + র → a + m + a + r
দেশ = দ + ে + শ → d + e + s
বাংলাদেশ = ব + া + ং + ল + া + দ + ে + শ → b + a + ng + l + a + d + e + sh
রহিম = র + হ + ি + ম → ro + h + i + m
কোকিল = ক + ো + ক + ি + ল → k + o + k + i + l

Based on the romanization process, each token is
romanized to Latin script. We have designed romanization rules based on vowel and consonant combinations. Some of the rules are described below:

1. The vowels are romanized directly according to their corresponding romanized forms (table 5).
2. If a consonant is in the last position of a word, it is replaced according to its romanized form (table 5). For example, in the word "বকুল" (bokul), "ল" is a consonant in the last position, so 'l' replaces the letter "ল".
3. If a consonant is not in the last position of a word and there is no vowel after it, 'o' is added along with the consonant. For example, "রহিম" is romanized as "rohim". Here, 'r' is for "র" and 'o' is added after it to make the pronunciation correct.
4. If the hasanta character "্" is found in a word, the letters before and after it are taken together to form a consonant conjunct, and the conjunct is looked up in the Bangla alphabets romanization table (table 5). If the consonant conjunct is found, its corresponding romanized form replaces it; if it is not found, "্" is skipped.
5. If the visarga character "ঃ" is found in the middle of a word, the consonant after it is doubled in its place. For example, the word "দুঃখ" is pronounced as "দুখ্খ" and its romanized form is "Dukkho".

VII. SPEECH SYNTHESIS

Synthesized speech is the ultimate product of a TTS system. The text is converted to phonemes based on the phoneme database. A phoneme is the fundamental unit of sound in a language. Prosody analysis then analyzes the prosody of the phonemes, words and sentences to determine the appropriate prosody. The prosody and phoneme information are used to produce audio waveforms of the sentences. In this paper, to synthesize speech, we have used an MBROLA voice [27]. It is a 16 kilohertz (kHz) male voice.
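Returning to the romanization rules, rules 1-3 above can be sketched in Python. The grapheme maps here cover only the letters needed for the paper's examples, not the full Table 5 set, and the function name is illustrative.

```python
# Minimal sketch of romanization rules 1-3 above.
# The maps are a tiny subset of the full grapheme table (Table 5).
CONSONANTS = {"ব": "b", "ক": "k", "ল": "l", "র": "r", "হ": "h", "ম": "m"}
VOWEL_MARKS = {"া": "a", "ি": "i", "ু": "u"}

def romanize(word):
    out = []
    chars = list(word)
    for i, ch in enumerate(chars):
        if ch in CONSONANTS:
            out.append(CONSONANTS[ch])
            last = i == len(chars) - 1
            followed_by_vowel = not last and chars[i + 1] in VOWEL_MARKS
            if not last and not followed_by_vowel:
                out.append("o")            # rule 3: add the inherent vowel
            # rule 2: a word-final consonant stays bare
        elif ch in VOWEL_MARKS:
            out.append(VOWEL_MARKS[ch])    # rule 1: vowels map directly
    return "".join(out)

print(romanize("বকুল"))   # bokul
print(romanize("রহিম"))   # rohim
```

Both of the paper's examples come out as expected: "বকুল" → bokul (rule 3 then rule 2) and "রহিম" → rohim (rule 3, rule 1, rule 2).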
After romanization, the text is converted to an interface named FreeTTSSpeakable: the source text that needs to be spoken is first converted to a FreeTTSSpeakable. It is then sent to the voice interface, which is the central processing point of speech synthesis. The voice takes the FreeTTSSpeakable as input, converts it into a series of utterances using the MBROLA voice, and generates audio output.

VIII. XML DATA REPRESENTATION

Along with the speech synthesis system, we have developed an XML-based data representation depending on the language units. Each language unit is given a specific tag. After tokenization (section V), all the tokens are given their corresponding language unit tags, such as date, time, word and currency. We have used regular expressions for language unit identification. This is an intermediate data representation where users can modify the data representation without knowing the technical details of the system, and can generate speech based on their own dialect from the XML. Besides, users
can parse data from the XML data tree (Fig.2). For example, if a user wants to get all the dates, s/he can get them by parsing the XML.

Fig.2. XML data presentation

IX. EXPERIMENTAL EVALUATION

To produce output speech, input text was taken from different sources: a daily newspaper, a poem and a short story. We considered the most popular daily newspaper Prothom Alo, the famous Bangla poem "কানা বগীর ছা" and the Bangla short story "ছুটি" ("Chuti") [27], written by the Nobel laureate Rabindranath Tagore. In table 7, the input text along with its corresponding romanized form is shown. To get the result, we selected two groups of people: graduate students and senior citizens. We let them hear the produced speech and write down the words they heard. The results show that graduate students are more attentive and understand more words than senior citizens: graduate students clearly understood 68% of the produced speech, whereas senior citizens understood 60% (Table 8). Senior citizens understand less of the produced speech because of physiological aging and changes in cognitive ability [26]. The result also varies with the source: the best result is found for the poem and the worst for the short story. The average accuracies for the newspaper, poem and short story are 67.37, 71.87 and 64.52 for graduate students, and 59.6, 62.5 and 59.6 for senior citizens. Table 7.
Input Bangla text with corresponding romanized text Bangla text Corresponding Romanized text সাদা ও কালো । যদি বলি এই দুটি আলাদা কোনো রাং নয়, চমকে উঠতে পারেন অনেকেই । বিজ্ঞান বলছে সাদা ও কালো এই দুটি একক কোনো রাং রাং নয় । সাদা রাং তৈরি করে সূর্যরশ্মি । সূর্যের আলো যখন প্রিজমের মধ্য দিয়ে যায় , তখন লাল-সবুজ-নীল—এই তিন রাং দেখা যায় । তা অনেক অনেক আগেই প্রমাণ করেছেন বিজ্ঞানী স্যার আইজাক নিউটন । আবার আবার সাতরঙা গোলাকার শক্ত কোনো কাগজ বা বোর্ড জোরে ঘুরতে থাকলে দেখা যায় সব রাংই উধাও - সাদা একটা কিছু চোখে পড়ছে । বিজ্ঞানীরা বলেন আলোর অনুপস্থিতি হলো কালো । আবার তশল্পীদের কাছে সব রাং মিশিয়ে তৈরি হয় কালো , সাদা তো ক্যানভাস । বলা হয় এই এই সাদা ও কালো একক রাং নয় । দুটোই অনেক রঙের সমন্বয় । তাই এই এই দুটি রঙের প্রভাব হয়তো অনেক বেশি । বিশেষ বিশেষ উপলক্ষ , বিশেষ বিশেষ আবেগ বা অনুভূতির প্রকাশ করা যায় সাদা-কালো দিয়ে । পশ্চিমে পশ্চিমে নানা অনুষ্ঠান , দিন কিাংবা আয়োজনে ড্রেসকোড , কালারকোড কালারকোড থাকলেও আমাদের দেশে নির্দিষ্টভাবে সে রকম কিছু নেই । তবে উপলক্ষ , দিনের আবহ বুঝে আমরাও কিন্তু পোশাকের রাং ঠিক করি । করি । জাতীয় শোক দিবস কিাংবা একুশে ফেব্রুয়ারির কোনো আয়োজনে আয়োজনে গেলে সাদা-কালোর বাইরে খুব একটা আমরা যাই না । একুশের একুশের প্রথম প্রহরে বা ভোরের প্রভাতফেরিতে সাদা-কালো</s>
<s>পোশাক , খালি পা—কাউকে বলে দিতে হয় না। যে বাড়িতে শোকের ঘটনা ঘটেছে , ঘটেছে , সেখানেও দেখা যায় স্বজন হারানো মানুষেরা শোকের প্রাথমিক ধাক্কা সামলে নিজের পরনের পোশাকটি বদলে সাদা বা কালো পোশাক পরে পরে শামিল হচ্ছেন শেষযাত্রায় । আসলে পরিবেশ-পরিস্থিতি , আবেগ-অনুভূতি বলে দিচ্ছে ওই সময়ে কোন রাং থাকবে পরনে । sada o kalo. jodi boli ai duti alada kono rong noy , chomoke uthote paren onekei. biggan boloche sada o kalo ai duti akok kono rong noy.,sada rong toiri kore surjroshi. Surjer alo jokhon prijomer moddho diye jay , tokhon lal sobuj nil ai tin rong dekha jay.,ta onek agei prman korechen biggani sjar aijak niuton. kono kagoj ba bord jore ghurote thakole dekha jay sob rongoi udhao , sada akota kichu chokhe poroche. bigganira bolen alor onuposthiti holo kalo. abar iosolpider kache sob rong misiye toiri hoy kalo , sada to kanovas. bola hoy ai sada o kalo akok rong noy . dutoi onek ronger somonnoy. tai ai duti ronger prvab hoyoto onek besi. bises bises upolokkho , bises abeg ba onuvutir prkas kora jay sada kalo diye. poschime nana onusthan , din kingoba ayojone dresokod , kalarokod thakoleo amader dese nirdistvabe se rokom kichu nei .,tobe upolokkho , diner aboh bujhe amorao kintu posaker rong thik kori.,jatiy sok dibos kingoba akuse februyarir kono ayojone gele sada kalor baire khub akota amora jaina .,akuser prthom prhore ba vorer prvatoferite sada kalo posak , khali pa kauke bole dite hoy na.,je barite soker ghotona ghoteche , sekhaneo dekha jay sojon harano manusera soker prathomik dhakka samole nijer poroner posakoti bodole sada ba kalo peaosak pore samil hocchen sesojatray., asole poribes poristhiti , abeg onuvuti bole dicche oi somoye kon rong thakobe porone . ঐ দেখা যায় তাল গাছ , ঐ আমাদের গা , ওই খানেতে বাস করে কানা বোগির বোগির ছা । ও বোগি তুই খাস কি , পান্তা ভাত চাস কি , একটা যদি পাস অমনি ধরে গাপুস গুপুস খাস । oi dekha jay tal gach , oi amader ga , oi khanete bas kore kana bogir cha . 
o bogi tui khas ki , panta vat chas ki , akota jodi pas omoni dhore gapus gupus khas . বালকদিগের সর্দার ফটিক চক্রবর্তীর মাথায় চট করিয়া একটা নূতন ভাবোদয় হইল । নদীর ধারে একটা প্রকাণ্ড শালকাষ্ঠ মাস্তুলে রূপান্তরিত হইবার প্রতীক্ষায় পড়িয়া ছিল । স্থির হইল, সেটা সকলে মিলিয়া গড়াইয়া লইয়া যাইবে। স্থির হইল , মসিা সকলে মিলিয়া গড়াইয়া লইয়া যাইবে। যে বয্ক্তি কাঠ , আবশ্যক কালে তাহার যে কতখানি বিস্ময় বিস্ময় বিরক্তি এবাং অসবুিধা বোধ হইবে , তাহাই উপলব্ধি করিয়া বালকেরা এ প্রসত্াবে সমপূ্র্ণ অনমুোদন করিল । কোমর বাধিয়া সকলেই যখন মনোযোগের সহিত কাযে প্রবওৃ হইবার উপক্রম করিতেছে এমন সময়ে ফটিকের কনিষ্ঠ মাখনলাল গমভ্ীরভাবে সেই গড়ুির উপরে গিয়া বসিল বসিল । ছেলেরা তাহার এইরপু উদার ঔদাসীন্য দেখিয়া কিছ ুবিমর্ষ হইয়া হইয়া গেল । balokodiger sordar fotik chokrbortir mathay chot koriya akota nuton vabodoy hoilo . nodir dhare akota prkando salokastho mastule rupantrit hoibar prtikkhoay poriya chil . sthir hoilo, seta sokole miliya goraiya loiya jaibe. je</s>