in Table 1. Note that the algorithm also counts the frequency of each reduplicated word separately. Identifying partial reduplication is relatively complicated compared to proper reduplication, and hence we first studied the features of partial reduplication in order to set up our algorithm. Earlier work has relied on heuristic rules. According to Bandyopadhyay [9], partial reduplications are of three types: (i) change of the first vowel or the matra (vowel modifier) attached to the first consonant, (ii) change of the consonant itself in the first position, or (iii) change of both the matra and the consonant. They also identified some exceptions, e.g. আবল-তাবল/aabol-taabol (irrelevant). From the linguistic study of Chattopadhyay [6], we derived the rule of formation of partial reduplication, i.e. the consonants that can be produced after the change are ট, ফ, ম, স. From the above studies and from our own observation of reduplicated words, the common features of partial reduplication are: (i) In most cases the lengths of the individual words are the same, e.g. কখেনা সখেনা/kakhano sakhano (sometimes), where length(কখেনা) = length(সখেনা), or the length of the reduplicated word is one more than that of the first word, e.g. ধব ধেব/dhab dhabe (pure white color), where length(ধব)+1 = length(ধেব).

Table 2.
Algorithm to find partial reduplication from corpus

ALGORITHM
s1:  wi and wi+1 ← words from corpus
s2:  if (length(wi) == length(wi+1)) then
s3:    count ← characterWiseDifferent(wi, wi+1);
s4:    differentCharacterPair ← (c1, c2);  // mismatched character pair in wi & wi+1
s5:    if (count == 1 && (c1 and c2 are both vowel modifiers or both letters)) then
s6:      print "reduplication";
s7:      frequency ← frequency + 1;
s8:    end if
s9:  else if (length(wi)+1 == length(wi+1)) then
s10:   count ← characterWiseDifferent(wi, wi+1);
s11:   if (count == 1) then
s12:     misMatchChar ← (wi, wi+1);  // mismatched character
s13:     if (misMatchChar is a vowel modifier) then
s14:       print "reduplication";
s15:       frequency ← frequency + 1;
s16:     end if
s17:   end if
s18: end if

A Computational Approach for Corpus Based Analysis of Reduplicated Words in Bengali

(ii) The character-wise difference between the reduplicated words is either a letter [e.g. কখেনা সখেনা/kakhano sakhano (sometimes), where the differing character pair is (ক, স)] or a vowel modifier [e.g. খুচ খাচ/khuch khach (little bit), where the differing character pair is (ু, া)], (iii) the number of differing characters is one, and (iv) in most cases this letter is a consonant of a specific type, such as ট, ফ, ম, স.

Based on these observations, we have incorporated features (i), (ii) and (iii) into our algorithm for identifying partial reduplication, given in Table 2. In this algorithm, the function characterWiseDifferent(wi, wi+1) returns the number of character-wise mismatches between the two words wi and wi+1; the algorithm also counts the frequency of each reduplicated word. Note that the algorithm also handles cases of the form wi-wi+1 (hyphenated), though these are not shown separately.

6 Corpus Based Study of Reduplicated Words in Bengali

For the corpus based study of reduplication in Bengali, the Technology Development
for Indian Languages (TDIL) corpus [12] has been used. The TDIL corpus was developed by the Department of Electronics, Govt. of India for the Bengali language (http://tdil.mit.gov.in/). This corpus contains texts from Literature (20%), Fine Arts (5%), Social Sciences (15%), Natural Sciences (15%), Commerce (10%), Mass Media (30%), and Translation (5%). Each category has sub-categories: Literature includes novels, short stories, essays, etc.; Fine Arts includes paintings, drawings, music, sculpture, etc.; Social Science includes philosophy, history, education, etc.; Natural Science includes physics, chemistry, mathematics, geography, etc.; Mass Media includes newspapers, magazines, posters, notices, advertisements, etc.; Commerce includes accountancy, banking, etc.; and Translation includes all the subjects translated into Bengali. The size of the corpus and the number of reduplicated words found using the above algorithms are given in Table 3.

Table 3. Reduplicated words in TDIL corpus

Corpus   # Files   # Sentences   # Words    # Reduplicated words (unique)   # Frequency
TDIL     1362      334260        4429574    6196                            61647

Table 3 shows that the percentage of reduplicated words in the corpus is 1.4%, which at a glance looks quite low; but when considered at the sentence level, 18.44% of the sentences contain reduplicated words. Since the semantics of a sentence depends highly on the presence of reduplicated words, this percentage shows that they cannot simply be ignored in any NLP application.

A. Senapati and U. Garain

7 Tuning Technique

Though our algorithm has the potential to identify reduplicated words better than other existing approaches, in order to reduce errors we have used a dictionary and frequency based tuning technique. Table 3 shows that there are a large number of reduplicated words with high frequency in the corpus.
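A minimal Python sketch of the Table 2 procedure that produces these candidates is given below. This is a hypothetical re-implementation, not the authors' code: the names is_partial_reduplication and character_wise_different are illustrative, and VOWEL_MODIFIERS approximates the set of Bengali matras discussed in Section 5.

```python
# Common Bengali dependent vowel signs (matras); an approximation for illustration.
VOWEL_MODIFIERS = set("\u09be\u09bf\u09c0\u09c1\u09c2\u09c3\u09c7\u09c8\u09cb\u09cc")

def character_wise_different(w1, w2):
    """Return the number of position-wise mismatches and the mismatched pairs."""
    pairs = [(a, b) for a, b in zip(w1, w2) if a != b]
    return len(pairs), pairs

def is_vowel_modifier(ch):
    return ch in VOWEL_MODIFIERS

def is_partial_reduplication(w1, w2):
    if len(w1) == len(w2):
        # Case (i): equal length, exactly one mismatch, and the mismatched
        # characters are both vowel modifiers or both ordinary letters.
        count, pairs = character_wise_different(w1, w2)
        if count == 1:
            c1, c2 = pairs[0]
            return is_vowel_modifier(c1) == is_vowel_modifier(c2)
    elif len(w1) + 1 == len(w2):
        # Case (ii): the second word is one character longer, the extra
        # character being a vowel modifier (e.g. dhab dhabe).
        for i in range(len(w2)):
            if w1 == w2[:i] + w2[i + 1:]:
                return is_vowel_modifier(w2[i])
    return False

frequencies = {}

def scan(words):
    """Walk adjacent word pairs of a corpus, counting reduplication candidates."""
    for w1, w2 in zip(words, words[1:]):
        if is_partial_reduplication(w1, w2):
            frequencies[(w1, w2)] = frequencies.get((w1, w2), 0) + 1
```

The same walk over adjacent pairs also accumulates the per-pair frequencies that the tuning step of Section 7 consumes.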
But our observation is that many of them are erroneous or are not reduplicated words at all, and some of them occur with very low frequency and can be ignored without loss of generality. For example, the algorithm produces "2424", "1111" or "((" as reduplicated words, since these are strings of the form "ww", but they are not actually reduplicated words. Hence, in order to improve the efficiency, we have used a tuning technique. We have also used a technique to identify reduplicated words containing eco words. This identification is very helpful in many NLP applications, especially in MT.

Frequency measure: The frequency measure is an important technique for validating a word or an association of words in a corpus. The general phenomenon is that if two words occur together with high frequency, this is evidence that they have a special function that is not simply explained as the function resulting from their combination. Based on this phenomenon we fixed a threshold frequency (Tf), and we consider reduplicated words in our experiment only if their frequency exceeds the threshold, i.e. is > Tf. To fix the threshold value, many factors have to be considered, such as the size of the corpus and the domain of
the corpus, etc. Note that in our experiment we defined Tf = 5 by applying a random sampling technique to the corpus. Using this technique many irrelevant entries have been eliminated. For example, িপ. িভ./p.v. (abbreviation of a name), বdৃ েবৗd/bridha boudha (irrelevant word), etc. structurally look like reduplication but actually are not.

Online dictionary: In this case we have eliminated incorrect words using an online dictionary as follows. We look up "ww" in the online dictionary [13], and if "ww" is found to be a valid word in the dictionary, then we reject "ww", i.e. we do not consider it a reduplicated word. For example, the algorithm will produce "বাবা", "দাদা", "িদিদ", "মামা", etc. as reduplicated words, since these are strings of the form "ww". Once these words are checked in the online dictionary and found to be valid words, they are rejected as reduplicated words. Following this method, all erroneous words like বাবা/baba (father), দাদা/dada (elder brother), িদিদ/didi (elder sister), মামা/mama (maternal uncle), etc. are eliminated.

Identification of eco words: In the case of the w1-w2 form, the system splits it into w1 and w2 and validates each in the online dictionary separately. If the first one, w1, is a valid word but the second one, w2, is not, then the pair is identified as containing an eco word. For example, a -ট /anka-tanka (maths etc.) [a /anka (maths), ট /tanka (eco word)], আtীয়-টাtীয়/aantiya-taantiya (relatives) [আtীয়/aantiya (relative), টাtীয়/taantiya (eco word)], বয্াপার-সয্াপার/bapar-sapar (matters) [বয্াপার/bapar (matter), সয্াপার/sapar (eco word)], etc. The interesting observation is that, after applying the tuning techniques, the number of reduplicated words is reduced significantly and most of the erroneous entries are eliminated; the revised result is shown in Table 4.

Table 4.
Reduplicated words in TDIL corpus (after tuning)

Corpus   # Files   # Sentences   # Words    # Reduplicated words (unique)   # Frequency
TDIL     1362      334260        4429574    794                             37919

After tuning, Table 4 shows that the percentage of reduplicated words in the corpus is 0.71% and, at the sentence level, 9.4% of the sentences contain reduplicated words. Clearly, the tuning process eliminates about 50% of the reduplications produced by the above algorithm. The next section shows the improvement in accuracy after the tuning technique.

8 Evaluation

The system has been evaluated by the stratified simple random sampling technique on the TDIL corpus. The technique is due to Sharon [11]. In brief, the corpus is partitioned into non-overlapping groups and then groups are selected at random. From a selected group, the manual output and the system output are compared for the final evaluation. Precision, Recall and F-score have been used as evaluation metrics and the results are shown in Table 5. Note that, though the system identifies eco words separately, we do not evaluate the performance of eco word identification separately.

Table 5. Results for identification of reduplicated words in TDIL corpus

                Corpus   Precision   Recall   F-score
Before Tuning   TDIL
0.63   0.85   0.72
After Tuning    TDIL     0.93   0.84   0.88

9 Error Analysis

In order to find the weaknesses of our algorithms, an error analysis has been carried out. This analysis not only measures the error in terms of the number of wrongly identified instances but also identifies the major sources of error in the different phases of the system. Broadly, we have identified sources of error in two phases: errors generated in the system output and errors generated in the tuning phase. Table 6 and Table 7 are the confusion matrices for the identification of reduplicated words before and after applying the tuning technique, respectively. Table 6 shows that there were 45685 reduplicated words in the corpus; the system captured only 38719 instances correctly and identified 22928 instances wrongly. Note that "Actual False (X)" in the first row of Table 6 indicates the number of non-reduplicated words present in the corpus. Since this number is not relevant to our measure, it is omitted, and similarly the value of "true negative (X)" is not calculated.

Table 6. Confusion matrix before applying the tuning technique on the TDIL corpus

                          Actual True (45685)      Actual False (X)
System identified true    true positive (38719)    true negative (X)
System identified false   false negative (6966)    false positive (22928)

Table 7 shows the result after applying the tuning process. Note that in this table the number of actual reduplicated words is 42230, i.e. the tuning technique removes 3455 (45685 - 42230) true instances. Based on our observation, the major contribution to this elimination comes from instances with low frequency, i.e. below the threshold level. Also note that, by applying the tuning technique, the system has eliminated 20273 (true negative) false instances. In this case, the major contribution to this elimination comes from the use of dictionary entries.
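The Table 5 scores follow directly from these confusion-matrix counts. As a sanity check, applying the standard precision/recall/F-score formulas to the Table 6 and Table 7 numbers (treating 22928 and 2655 as the false-positive counts, as the running text does) reproduces both rows of Table 5:

```python
def prf(tp, fp, fn):
    """Precision, recall and F-score from raw confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return round(precision, 2), round(recall, 2), round(f_score, 2)

# Before tuning (Table 6): TP = 38719, FP = 22928, FN = 6966
print(prf(38719, 22928, 6966))  # -> (0.63, 0.85, 0.72)

# After tuning (Table 7): TP = 35474, FP = 2655, FN = 6756
print(prf(35474, 2655, 6756))   # -> (0.93, 0.84, 0.88)
```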
Note that some very common words (false instances like বাবা/baba (father) with frequency 1342, দাদা/dada (elder brother) with frequency 327, িদিদ/didi (elder sister) with frequency 325, etc.), i.e. instances with very high frequencies, are eliminated, which improves the system performance. The error analysis at the algorithmic level is given below.

Table 7. Confusion matrix after applying the tuning technique on the TDIL corpus

                          Actual True (42230)      Actual False (22928)
System identified true    true positive (35474)    true negative (20273)
System identified false   false negative (6756)    false positive (2655)

The errors produced by the algorithms can be categorized into two types. The first type is the false negatives, i.e. the algorithm fails to identify reduplicated words. The algorithm is designed based on an analysis of the lexical features of reduplicated words (detailed in Section 5). But these features do not cover all types of reduplicated words, especially those where the first and second words are morphological variants. For example, reduplicated words like মাথা মুnু/matha mundu (meaningless) and েলাটা কmল/lota kambal (belongings of a poor man) are not covered by the algorithms. Hence it affects the accuracy in terms
of recall, and this is reflected in the recall value shown in Table 5. The other type of error is the false positives, i.e. the algorithm wrongly identifies reduplicated words. For example, consider the system-generated output with frequencies: দমদম/dumdum (Dumdum, name of a place) [frequency 50], টাটা/tata (Tata, name of a place) [frequency 50], /srisri (a Mr.-like term that comes before the name of a male person) [frequency 17], etc. Clearly, দমদম/dumdum (Dumdum) will not be eliminated by the tuning mechanism, because দমদম/dumdum is not a valid word in the online dictionary and its frequency is greater than the threshold frequency (50 > Tf = 5). Hence it contributes errors and affects the accuracy in terms of precision, which is reflected in the precision value shown in Table 5.

The errors produced by the tuning technique can also be categorized into two types. The first type is the false negatives, i.e. the tuning technique eliminates true reduplicated words. This happens when a reduplicated word is present in the corpus with low frequency (≤ Tf). For example, consider the system-generated output with frequencies: মড়মড়/marmar (sound of breaking) [frequency 4], িঝিরিঝির/jhirjhir (sound of rain in slow motion) [frequency 4], গুিটগুিট/gutiguti (slowly) [frequency 4], etc. Though all of these are valid reduplicated words, they are eliminated by the tuning mechanism because of their low frequency (≤ Tf). Obviously this affects the accuracy in terms of recall, which is reflected in the recall value shown in Table 5. The other error type is the false positives, i.e. the tuning technique fails to eliminate false reduplicated words. The examples described above, like দমদম/dumdum (Dumdum) [frequency 50] and টাটা/tata (Tata) [frequency 50], are not eliminated by the tuning technique.
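The tuning of Section 7, whose two failure modes are analysed above, can be condensed into a single filtering pass. The sketch below is an illustrative reconstruction, not the authors' code: tune combines the frequency threshold (Tf = 5, as in the text), the "ww"-in-dictionary rejection, and the eco word test for w1-w2 pairs, with a plain Python set standing in for the online dictionary [13].

```python
TF = 5  # threshold frequency fixed in Section 7

def tune(candidates, dictionary):
    """Filter reduplication candidates and flag eco words.

    candidates: {(w1, w2): frequency}; dictionary: set of valid words,
    a stand-in for the online-dictionary lookup.
    """
    kept, eco = [], []
    for (w1, w2), freq in candidates.items():
        if freq <= TF:
            continue  # low-frequency noise, e.g. "((", rare look-alikes
        if w1 == w2 and (w1 + w2) in dictionary:
            continue  # a valid word of the form "ww", e.g. baba, dada
        kept.append((w1, w2))
        if w1 in dictionary and w2 not in dictionary:
            eco.append((w1, w2))  # eco word: w1 valid, w2 is its echo
    return kept, eco
```

Note that, exactly as in the error analysis, a frequent place name such as dumdum survives this filter, while a rare but genuine reduplication such as marmar does not.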
10 Conclusion

This paper presents a pioneering attempt to develop a computational approach for the corpus based study of reduplicated words in Bengali. The paper reports the frequencies of reduplicated words and shows how frequently reduplicated words are present in a corpus as well as at the sentence level. It also identifies, with examples, the effect of reduplication on MT systems and focuses on an untouched issue in Bengali-English MT. The algorithms used for identifying the reduplicated words are very simple. Though the performance of the algorithms alone is not very high, after applying the tuning techniques the performance improves to a satisfactory level. The error analysis identifies the weaknesses of the system, and hence there is future scope to improve the accuracy further.

Acknowledgement. The authors sincerely acknowledge Prof. B.B. Chaudhuri of Indian Statistical Institute, who kindly shared his expertise on Bengali reduplicated words with the authors.

References
1. Dash, N.: A Descriptive Study of Bengali Words, pp. 225–251. CUP (2015)
2. Ananthanarayana, H.S.: Reduplication in Sanketi Tamil. OPiL, vol. 2, pp. 39–49 (1976)
3. Abbi, A.: Reduplicated Adverbs of Manner and Cause in Hindi. Indian Linguistics 38(2), 125–135 (1977)
4. Murthy, C.: Formation of Echo-Words
in Kannada. In: All India Conference of Dravidian Linguistics (1972)
5. Nongmeikapam, K.: Identification of Reduplication MWEs in Manipuri, a rule-based approach. In: 23rd International Conference on the Computer Processing of Oriental Languages, California, USA, pp. 49–54 (2010)
6. Chattopadhyay, S.K.: Bhasa-Prakash Bangala Vyakaran, 3rd edn. Rupa publication (1992)
7. Chaudhuri, B.B.: Bangla Dhwanipratik: Swarup o Abhidhan (Bangla Sound Symbolism: Properties and Dictionary). Paschimbanga Bangla Academy, Kolkata (2010)
8. Thompson, H.R.: Bengali: A Comprehensive Grammar, pp. 663–672. Routledge publication (2010)
9. Bandyopadhyay, S.: Identification of Reduplication in Bengali Corpus and their Semantic Analysis: A Rule-Based Approach. In: Proceedings of the Workshop on Multiword Expressions: from Theory to Applications (MWE 2010), Beijing, pp. 72–75 (2010)
10. Senapati, A., Garain, U.: Anaphora Resolution in Bangla using global discourse knowledge. In: Int. Conf. of Asian Language Processing, Hanoi, Vietnam (2012)
11. Sharon, L.L.: Sampling: Design and Analysis, 2nd edn. Advanced Series, pp. 73–101 (2010)
12. TDIL Corpus: A nation-wide consortium for machine translation of Indic languages funded by the Ministry of Information Technology, Govt. of India (1995), http://www.tdil-dc.in
13. Digital Dictionaries of South Asia, http://dsal.uchicago.edu/dictionaries/biswas-bangala/
Word Sense Disambiguation in Bengali language using unsupervised methodology with modifications

ALOK RANJAN PAL1,* and DIGANTA SAHA2

1 Department of Computer Science and Engineering, College of Engineering and Management, Kolaghat, India
2 Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
e-mail: chhaandasik@gmail.com; neruda0101@yahoo.com

MS received 21 March 2017; revised 1 June 2018; accepted 14 May 2019; published online 27 June 2019

Abstract. In this work, Word Sense Disambiguation (WSD) in Bengali language is implemented using unsupervised methodology. In the first phase of this experiment, sentence clustering is performed using the Maximum Entropy method and the clusters are labelled with their innate senses by manual intervention, so that these sense-tagged clusters can be used as sense inventories for further experiment. In the next phase, when a test data comes to be disambiguated, the Cosine Similarity Measure is used to find the closeness of that test data to the initially sense-tagged clusters. The minimum distance of the test data from a particular sense-tagged cluster assigns to the test data the same sense as that of the cluster. This strategy is considered the baseline strategy, which produces a 35% accurate result in the WSD task. Next, two extensions are adopted over this baseline strategy: (a) Principal Component Analysis (PCA) over the feature vector, which produces 52% accuracy in the WSD task, and (b) Context Expansion of the sentences using the Bengali WordNet coupled with PCA, which produces 61% accuracy in the WSD task.
The data sets used in this work are obtained from the Bengali corpus developed under the Technology Development for the Indian Languages (TDIL) project of the Government of India, and the lexical knowledge base (i.e., the Bengali WordNet) used in the work was developed at the Indian Statistical Institute, Kolkata, under the Indradhanush Project of the DeitY, Government of India. The challenges and the pitfalls of this work are also described in detail in the pre-conclusion section.

Keywords. Natural language processing; word sense disambiguation; principal component analysis; context expansion.

1. Introduction

Word Sense Disambiguation (WSD) [1, 2] is one of the major tasks in the field of Natural Language Processing (NLP). There are many words in every language that carry different senses in different contexts. For example, the word "Bank" has different meanings in different contexts, such as "Financial institution", "River-side sloping land", "Reservoir", etc. These words are called ambiguous words. The human brain has some inborn capability to distinguish these senses. However, an automated system depends on some sets of rules for sense finding. There are three major strategies used in this domain: (a) supervised methodology, (b) knowledge-based methodology and (c) unsupervised methodology.

In supervised methodologies [3–5], a few previously created training sets are used for learning purposes. When a test data comes for sense evaluation, these training sets are used by the system for sense finding. The knowledge-based methodologies [6, 7] use online dictionaries or thesauri as a sense inventory. The most used online semantic dictionary is WordNet. The unsupervised methodologies [8–12] do not classify the instances; rather, they cluster the instances. This methodology consists of two sub-tasks.
First, the sentences are clustered using any clustering algorithm, and these clusters are labelled with their innate senses by manual intervention, so that they can be used as sense inventories for further experiment. Next, any distance-based
similarity measuring technique is used to find the similarity of a new test data with these sense-tagged inventories. The minimum distance between a test data and a sense-tagged inventory represents the sense of that test data.

In this work, WSD is implemented in the following way: first, sentence clustering is performed using the Maximum Entropy (ME) method. The sentence clusters are tagged with relevant senses by manual intervention. Next, the Cosine Similarity Measure is used as a distance-based similarity measuring technique. Using this baseline model, the accuracy of WSD achieved is around 35%.

*For correspondence
Sådhanå (2019) 44:168 © Indian Academy of Sciences, https://doi.org/10.1007/s12046-019-1149-2

In this work, two extensions are adopted over the baseline model: (a) Principal Component Analysis (PCA) on the feature vector, which produces a 52% accurate result in the WSD task, and (b) Context Expansion of the sentences using the Bengali WordNet followed by PCA, which produces 61% accuracy in the WSD task. The data sets used in this work are obtained from the Bengali corpus developed under the Technology Development for the Indian Languages (TDIL) project of the Government of India, and the lexical knowledge base (i.e., the Bengali WordNet) used in this work for Context Expansion was developed at the Indian Statistical Institute, Kolkata, under the Indradhanush Project of the DeitY, Government of India. The challenges and the pitfalls of this work are described at the end of this report.

2. Survey

A brief survey of the field of WSD is presented in this section. First, the state-of-the-art performance is presented; next, the works on WSD in Asian languages, followed by the Indian languages, are described.

2.1 State-of-the-art performance

The performances of the state-of-the-art WSD systems are presented here. First, the WSD systems were developed using homographs.
The accuracy of those systems was above 95% based on very little input knowledge. For example, in 1995, Yarowsky proposed a semi-supervised approach and evaluated it on 12 words (96.5%). In 2001, Stevenson and Wilks used POS-tagged data and other knowledge sources on all words using the Longman Dictionary of Contemporary English. Their proposed system achieved an accuracy of 94.7%.

In the Senseval-1 (Kilgarriff and Palmer, 2000) evaluation exercise, the best accuracy achieved was 77% on the English lexical sample task, which is just below the level of human performance (80%) estimated by inter-tagger agreement; however, human replicability is estimated at 95%. In 2001, the scores in Senseval-2 (Edmonds and Cotton, 2001) were lower as the task was more difficult, because it was based on the finer grained senses of WordNet. The best accuracy on the English lexical sample task in Senseval-2 was 64% (to an inter-tagger agreement of 86%). Before the Senseval-2 exercise, there was a debate on whether the knowledge-based approach or the machine-learning-based approach was better. However, Senseval-2 showed that supervised approaches had the best performance. The performance of the unsupervised systems on the English lexical sample task was 40%, which is below the most-frequent-sense baseline of 48%, but better than the random baseline of 16%.

In 2004, in the Senseval-3 (Mihalcea and Edmonds, 2004) evaluation exercise, the top systems on the English lexical sample task performed at human level according to inter-tagger agreement. The 10 top systems (all supervised) performed 71.8–72.9% correct disambiguation compared with an inter-tagger agreement of 67%. The best unsupervised system overcame the most-frequent-sense baseline, achieving 66% accuracy. The score on the all-word task was lower than in Senseval-2,
probably because of the more difficult text. Senseval-3 also brought the complete domination of supervised approaches over the pure knowledge-based approaches.

2.2 WSD in Asian languages, as well as in Indian languages

Various works on WSD have been implemented in English and other European languages, but very few are established in Indian languages due to large varieties in morphological inflections and the lack of sense inventories, machine-readable dictionaries, knowledge resources, etc., which are required by WSD algorithms. The works in different Asian as well as Indian languages are described in the next sections.

2.2a Manipuri: Richard Singh and K. Ghosh proposed an algorithm for the Manipuri language in 2013 [13]. In this work, a 5-gram window is formed using the target word and its context words to form the context information. From this contextual information, the actual sense of the focused word is disambiguated. In the work, a positional feature is used because of the lack of other relevant morphological features.

2.2b Malayalam: Haroon [14] made the first attempt at automatic WSD in the Malayalam language. The author used the knowledge-based approach: one method is based on a hand-devised knowledge source and the other on conceptual density using the Malayalam WordNet. In the first approach the author used the Lesk and Walker algorithm, and in the second a conceptual-density-based algorithm, where the semantic relatedness between the words is considered. The semantic relatedness between the words is calculated based on the path, depth and information content of the words in the WordNet.

2.2c Punjabi: Kumar and Khanna [15] have proposed a WSD algorithm for resolving the ambiguity of an ambiguous word in a text document in the Punjabi language. The authors used a modified Lesk algorithm for WSD. Two hypotheses were considered in this approach.
The first one is based on the words that appear together in a sentence, and the final sense assigned to the target word is the one closest according to the neighbouring words. The second approach is based on related senses, which are identified by finding the overlapping words in their definitions.

2.2d Assamese: Sarmah and Sarma [16] have proposed a supervised WSD system based on a decision tree. The system consists of four modules: (a) preprocessing of raw data, (b) sense inventory preparation, (c) feature selection and (d) constructing the decision tree. The algorithm produced an average F-measure of 0.611 in the 10-fold cross-validation evaluation strategy when tested on 10 Assamese ambiguous words. Kalita and Barman [17] have proposed a WSD system to disambiguate Assamese nouns and adjectives. The authors propose a model based on the Walker algorithm, which uses the subject category or domain to determine the actual sense of the words. The system produced a precision and recall of 86.66 and 61.09, respectively, on random sentences collected from the internet.

2.2e Hindi: Pushpak Bhattacharyya and group [18] have proposed the first WSD system for Hindi nouns using the WordNet. Accuracy of the system ranges from 40% to 70% for various documents like Agriculture, Science, Sociology, etc. Vishwarkarma and Vishwarkarma [19] have proposed a graph-based algorithm for WSD in Hindi. The authors used a graph-based model based on the similarities among word senses. The authors claim an accuracy of 65.17% in the WSD task. Satyendar Singh and group have proposed a WSD
system based on the Leacock–Chodorow semantic relatedness measure [20]. The algorithm is tested on a data set consisting of 20 Hindi polysemous nouns, obtaining an average precision and recall of 60.65% and 57.11%, respectively. Yadav and Vishwarkarma [21] have proposed a WSD system for Hindi nouns based on mining association rules. The authors claim an average precision of 72% in sense finding. Gaurav Tomar and group have proposed a WSD system based on word clusters, obtained using Probabilistic Latent Semantic Analysis (PLSA) [22]. The authors tested this method on English and Hindi data sets and achieved accuracies of 83% and 74%, respectively. Kumari and Singh [23] have proposed an algorithm for WSD for Hindi nouns using a genetic algorithm (GA). The authors applied this algorithm to a list of 12 nouns and achieved a recall of 91.6%. Devendra K Tayal and group have proposed an approach based on Hyperspace Analogue to Language (HAL) to disambiguate polysemous Hindi words [24]. The authors claim an accuracy of 79.16% in the WSD task for Hindi words.

2.2f Nepali: Roy et al [25] have proposed a semantic-graph-based algorithm for WSD in the Nepali language. This algorithm combines lexical overlap and a conceptual-distance-based strategy. The authors carried out the experiment on a data set of 912 nouns and 751 adjectives. The overlap-based approach produced a WSD accuracy of 54% for nouns and 42% for adjectives, and the conceptual-distance-based method produced a WSD accuracy of 62% for nouns and 58% for adjectives.

2.2g Myanmar: Aung et al [26] have proposed a WSD system based on Naive Bayes classification to disambiguate nouns and verbs in the Myanmar language. The system was evaluated on 60 ambiguous nouns and 100 ambiguous verbs, producing 89% precision, 92% recall and 90% F-score in the WSD task.

2.2h Arabic: A. Zouaghi and group proposed an algorithm for WSD in the Arabic language using the Lesk algorithm.
The authors claim a precision of 59% using the traditional Lesk algorithm, whereas the modified Lesk algorithm produced a precision of 67% in the WSD task. Mohamed M El-Gamml and M Waleed Fakhr proposed a WSD system for the Arabic language using a Support Vector Machine (SVM) classifier, following the Levenshtein Distance measuring algorithm to determine the similarity between words. They compared the performance of their proposed model with supervised and unsupervised algorithms like the Naive Bayes classifier and Latent Semantic Analysis with K-means clustering. Merhben et al [27] have proposed a hybrid approach for WSD in the Arabic language. In this experiment, the authors use Latent Semantic Analysis and the Harman, Croft and Okapi methods for information retrieval, and finally the Lesk algorithm is developed for sense disambiguation. The test instances are collected from a web corpus. The authors conduct the experiment on a data set containing 10 ambiguous words, and the accuracy in the WSD task is claimed as 73%. Bouhriz et al [28] proposed a semi-supervised method for Arabic WSD. The authors claim a precision of 83% in the WSD task. Merhbene et al [29] have proposed an algorithm for WSD in the Arabic language based on two kinds of context information: first, the information extracted from the local context of the word to be disambiguated and, second, the global context extracted from the full text. The authors claim a precision of 74% in the WSD task.

2.2i Bengali: Although several works
<s>on WSD in the Bengali language are in progress at different research organizations in India and Bangladesh, only a few of them are available on the web.

Das and Sarkar [30] have presented a WSD system for Bengali to obtain the correct lexical choice for Bengali–Hindi machine translation. The authors used an unsupervised graph-based method to find sense clusters. Following this strategy, the authors used a vector-space-based approach to map the sense clusters to the Hindi translation of the target word, and thus the actual sense of an ambiguous word was predicted from this mapping operation.

Pandit and Naskar [31] have proposed a memory-based approach for WSD in Bengali using the k-NN method, with an accuracy of 71%.

Sådhanå (2019) 44:168 Page 3 of 13 168

Afsana Haque and Mohammed Moshiul Hoque proposed a dictionary-based approach for sense disambiguation of Bengali nouns, adjectives and verbs. The authors claim that the proposed system can disambiguate ambiguous Bengali words with 82.40% accuracy for some selected sentences.

Nazah et al [32] have proposed an algorithm for WSD in the Bengali language based on a Naive Bayes classifier and an Artificial Neural Network (ANN). The authors claim an accuracy of 82% in the WSD task for some selected Bengali ambiguous words.

3. Proposed unsupervised approach for WSD

Unsupervised approaches perform the WSD task through two sub-tasks: first, sentence clustering, which groups the sentences into several clusters that are then tagged with relevant senses according to their innate senses; and second, similarity measuring, which finds the minimum distance between a test data item and the sense-tagged clusters.
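The assignment step of this pipeline, labelling a test item with the sense of its most similar sense-tagged cluster, can be sketched as follows. This is an illustrative stand-in, not the paper's code: it assumes each cluster is summarised by a centroid vector over the feature space, with function and variable names invented for the example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def assign_sense(test_vec, sense_centroids):
    """Label a test vector with the sense of the sense-tagged cluster
    centroid it is most similar to (highest cosine)."""
    return max(sense_centroids, key=lambda s: cosine(test_vec, sense_centroids[s]))
```

Maximising cosine similarity corresponds to minimising the angular distance between the test item and a cluster, which is the "minimum distance" criterion described above.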
The minimum distance of a test data item from a particular sense-tagged cluster assigns to the test data the same sense as that of the cluster. In this work, sentence clustering is performed using the ME method and similarity is measured using the Cosine Similarity Measuring technique.

3.1 Flowchart of the baseline method

The flowchart in figure 1 depicts the baseline strategy developed in this work.

3.2 Text normalization

Text normalization is the task of converting real-life text into a uniform representation. The texts retrieved from the TDIL Bengali corpus¹ are not normalized properly. Hence, a set of text normalization steps is executed to transform the texts into machine-readable form. The steps include (a) removal of different punctuation symbols, multiple spaces and new lines, (b) conversion of all the fonts into a single Unicode-compatible font ("Vrinda" is used throughout the work) and (c) taking into account the different sentence termination symbols (especially the "dāri" symbol used in the Bengali language). Figures 2 and 3 present a sample non-normalized text and a normalized text, respectively.

3.3 Text lemmatization

To increase the lexical coverage of the data sets, all the texts have been lemmatized before this work. In this experiment, the texts are lemmatized using a Bengali lemmatizer tool developed in a project at the CSE Department, Jadavpur University, Kolkata. A sample text after lemmatization is presented in figure 4.

In our earlier experiment [33], it was observed that, as Bengali words are morphologically very complex, the accuracy of WSD on the same data set increases from 80% to 85% when moving from a non-lemmatized environment to a lemmatized one. Hence, in this work, all the experiments are carried out in the lemmatized environment.

3.4 Function word selection

In the Bengali language, there is no specific distinction between function words and content words; rather it depends</s>
<s>on the nature of the experiment.

Figure 1. Flowchart of the baseline procedure.

¹The TDIL Bengali corpus is obtained from the Linguistic Research Unit Department, ISI, Kolkata.

According to theoretical linguistics, all Bengali words carry relevant senses in specific cases. However, in computational linguistics, to keep the size of the data set within a manageable length, a few less informative words are considered as function words. In this work, the Bengali words other than nouns, verbs, adjectives and adverbs (adverbs are also regarded as a type of adjective in Bengali) are identified and discarded as function words.

3.5 Feature selection

Feature selection plays an important role in the clustering operation. In this experiment, during the clustering task, initially all the distinct words (the vocabulary) present in the text were considered as features for the clustering operation. Thus, the length of the feature vector became around 2000–3500 according to the length of the data sets. Unfortunately, our system [Processor: Intel(R) Core(TM) i7-4510U CPU @ 2.00 GHz 2.60 GHz; RAM: 8.00 GB; System type: 64-bit OS] could not handle this length of feature vector for the clustering operation, because the size of the feature space (number of sentences versus feature-vector length) became too large to handle. For example, when 500 sentences have to be clustered w.r.t. a feature vector of length 3500, the size of the array is [500 × 3500]. During the mathematical calculations on this extremely large array, the system failed to perform the clustering task.

To resolve this problem, the term frequency (TF) of each distinct word in the text is calculated and the features are arranged in decreasing order w.r.t. their TFs. After this, to keep the feature vector within a manageable length, pruning is applied to the feature vector from the bottom of the list.
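The TF-ranked pruning just described can be sketched as follows. This is an illustrative stand-in for the procedure, not the actual implementation: count term frequencies, order features by decreasing TF, and cut the tail below a threshold.

```python
from collections import Counter

def prune_features(tokenised_sentences, threshold=4):
    """Build the feature vector: count the TF of each distinct word,
    sort features by decreasing TF, and drop features whose TF is at
    most `threshold` (the paper prunes features with TF up to 4)."""
    tf = Counter(tok for sent in tokenised_sentences for tok in sent)
    # most_common() yields (word, count) pairs in decreasing TF order.
    return [w for w, c in tf.most_common() if c > threshold]
```

With the threshold raised step by step from 1 to 4, as in the experiment, the surviving feature list shrinks gradually toward a manageable length.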
As a result, the less frequently occurring features are removed from the list and the feature vector becomes shorter. During the experiment, the features having TF of 1, 2, 3 and 4 are gradually pruned and the length of the feature vector becomes gradually shorter. Finally, after removing the features having TF up to 4, the length of the feature vector becomes around 120–160, which is manageable by the system. Thus, the threshold of pruning is set to 4.

3.6 Selection of ambiguous words

As stated earlier, according to theoretical linguistics, all Bengali words carry multiple senses based on context. A standard Bengali lexical dictionary cites the major senses of the words, but the available Bengali machine-readable dictionary (WordNet) covers a smaller subset of that sense domain. Moreover, the TDIL Bengali corpus contains the commonly used ambiguous words with their most frequently used senses. Hence, the system has to follow an effective methodology to select the ambiguous words for the experiment.

Using a separate program, it is calculated that the TDIL Bengali text corpus contains a total of 3589220 words in inflected and non-inflected forms; among them, 199245 words are distinct in nature (the vocabulary). Using a separate program, the TF of each distinct word is also calculated.

Figure 2. Partial view of a sample non-normalized text.
Figure 3. Partial view of a sample normalized text.
Figure 4. A sample text after lemmatization.

Figure 5 presents the most frequently occurring words in the</s>
<s>corpus.

Although theoretically almost every Bengali word carries multiple senses in different contexts [34, 35], in the computational field only those words are considered for the experiment that are present in the corpus with some needful number of occurrences.

3.7 Selection of senses of the ambiguous words for evaluation

In reality, most of the senses of Bengali words are covered by the Bengali lexical dictionary, whereas the online semantic dictionary (the Bengali WordNet) contains a subset of the overall senses. Also, the corpus contains only those senses of an ambiguous word that are commonly used in different contexts. In the experiment, only those senses are taken into account that are present in the corpus in some needful number of sentences. The threshold value for the number of sentences carrying a particular sense is set to 20 (so that at least 10 sentences can be used for learning and the remaining 10 for testing). Algorithm-1 presents the sense selection procedure.

3.8 Result and corresponding evaluation

First of all, the total instance sentences are clustered individually using almost every clustering algorithm available in the weka-3-6-13 tool. However, the clustering results are not acceptable in all cases. For example, the simple K-means clustering algorithm available in this tool nearly fails to cluster the instances. Some of the other algorithms could not cluster the instances according to the given (pre-defined) number of clusters; a few of them even produced empty clusters.

The ME method performed the clustering task up to a certain level of expectation. In this method, the number of desired clusters is predefined, which is the same as the number of different senses considered for evaluation (see table 1).
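The sense selection step of section 3.7 can be sketched as follows. This is an illustrative stand-in for Algorithm-1 (which is not reproduced here), with invented names: keep only senses backed by at least the threshold number of corpus sentences, then split each sense's sentences evenly into learning and test halves.

```python
def select_senses(sense_to_sentences, threshold=20):
    """Keep senses attested by at least `threshold` sentences and
    split each sense's sentences into learning and test halves."""
    selected = {}
    for sense, sents in sense_to_sentences.items():
        if len(sents) >= threshold:
            half = len(sents) // 2
            selected[sense] = {"learn": sents[:half], "test": sents[half:]}
    return selected
```

With the threshold of 20 used in the paper, every retained sense contributes at least 10 sentences to the knowledge base and 10 to the test set.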
Next, the derived sentence clusters are labelled with their innate senses by manual intervention.

In the next phase, the Cosine Similarity Measure is used as a distance-based similarity measuring technique to find the closeness of a test data item to all the sense-tagged clusters. The minimum distance of a test data item from a sense-tagged cluster assigns to the test data the same sense as that of the cluster. During the experiment, approximately half of the data were used for preparation of the knowledge base through clustering and sense tagging, and the remaining data were used for testing.

Figure 5. The most frequently occurring words in the corpus.

The accuracy of the result is evaluated programmatically by comparing the sense-derived test data to an ideal result, prepared earlier with the help of a standard Bengali lexical dictionary (Sansad Bānglā Abhidhān). This baseline model is tested on 7 frequently used Bengali ambiguous words, and the accuracy achieved in the WSD task is 35%. The final result of WSD is presented in the form of "percentage of accuracy" instead of precision, recall and F-measure, because the system labels a sense tag to each and every sentence either correctly or wrongly.

4. Extensions of the baseline methodology

Two measures are adopted for improving accuracy:
(a) PCA on the feature vector and
(b) Context Expansion of the sentences using the Bengali WordNet.

4.1 PCA on the feature vector

PCA is used in this model to filter the principal components among the features. In the first phase of execution, the entire vocabulary was considered as the feature vector. However, unfortunately the length of the feature vector (according to the size of the data</s>
sets, this length varies from 2000 to 3500 approximately) was beyond the computational power of the available system (the system specification is mentioned in section 3.5). Hence, reducing the length of the feature vector while preserving the principal components became an obvious issue. To deal with this problem, PCA is incorporated into this model. During execution of the system, the PCA module available in the weka-3-6-13 tool is used to sort out the principal components from the feature vector. This tool selects the principal components and eliminates the least important features from the feature vector with the help of the Ranker algorithm, an in-built algorithm used for this task in the tool. Using this tool, the length of the feature vector was reduced to approximately 120–160 from its original size, which was initially on the scale of thousands.

Result and corresponding evaluation

Now, the same data sets used in the baseline methodology are used for sense evaluation with the reduced feature vector, and the overall accuracy of WSD increases to 52% (see table 2). The accuracy of the result is evaluated programmatically by comparing the sense-derived test data to an ideal result, prepared earlier with the help of a standard Bengali lexical dictionary (Sansad Bānglā Abhidhān). The final result of WSD is presented in the form of "percentage of accuracy" instead of precision, recall and F-measure, because the system labels a sense tag to every sentence either correctly or wrongly.

4.2 Context Expansion using the Bengali WordNet

Although the accuracy of the result is increased using PCA, it is not up to the level of expectation. The reason, observed through close observation, is the lack of lexical match between the words of the sentences and the features in the feature vector.

Table 1. WSD result using the baseline approach.

To overcome this problem, the same strategy is considered as in the knowledge-based approach, that is, Context Expansion of the sentences using the Bengali WordNet. In this method, the context of every sentence is expanded with the meaningful words of the sentence and their synonymous words from the WordNet (see figure 6).

The sizes of the synsets of different words present in the WordNet are different. Although many commonly used Bengali words are not present in this WordNet (see section 5.6), the system still uses the available knowledge from this dictionary for sense expansion. Sense definitions of a sample word are given in table 3.

The Bengali WordNet

The Bengali WordNet is an online semantic dictionary used for obtaining semantic information about Bengali words (Dash 2012). It provides different information about Bengali words and also gives the relationship(s) that exist between words. The Bengali WordNet is developed at the Indian Statistical Institute, Kolkata, under the Indradhanush Project of the DeitY, Government of India. In this WordNet, a user can search a Bengali word and get its meaning. In addition, it gives the grammatical category, namely noun, verb, adjective or adverb, of the word being searched. It is noted that a word may appear in more than one grammatical category and a particular grammatical category can have multiple senses. The WordNet also provides information for these categories and all senses of the word being searched. Apart from the category of each sense, the following set of information for a Bengali
<s>word is present in the WordNet:
(a) meaning of the word,
(b) example of use of the word,
(c) synonyms (words with similar meanings),
(d) part-of-speech,
(e) ontology (hierarchical semantic representation) and
(f) semantic and lexical relations.

At present the Bengali WordNet contains 36534 words covering all major lexical categories, namely noun, verb, adjective and adverb.

Result and corresponding evaluation

Now, the same data sets used in the previous two experiments are used in this phase, and the overall accuracy increases to 61% (see table 4). The accuracy of the result is evaluated programmatically by comparing the sense-derived test data to an ideal result, prepared earlier with the help of a standard Bengali lexical dictionary (Sansad Bānglā Abhidhān).

5. A few close observations

A lot of challenges appeared in every phase of the experiments.

5.1 Wide range of morphological inflections

The wide range of morphological inflections of Bengali words is a major factor in this experiment. This range is so large in real-life data that it is quite impossible to track all the inflections computationally. For example, in English the word "eat" has only five morphological forms: "eat", "ate", "eaten", "eating" and "eats". However, in the Bengali language this word has more than 150 morphological inflections, including calit (colloquial) and sādhu (chaste) forms, such as (khāi), (khāo), (khās), (khāy), (khācchi), (khān), (khācchis), (khācchen), (khāccha), (khācche), (kheyechi), (khācchi), (kheyecha), etc.

Table 2. WSD result using PCA.

Further, the nominal and adjectival morphology in Bengali is lighter compared with verbal morphology. In general, nouns are inflected according to seven grammatical cases (nominative, accusative, instrumental, ablative, genitive, locative and vocative), two numbers (singular and plural), a few determiners like (-khānā) and (-khāni) and a few emphatic markers like (-i) and (-o), etc.
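The nominal inflections just listed (case endings, plural markers, determiners like -khānā/-khāni, emphatic -i/-o) are the kind of variation that the lemmatization step of section 3.3 collapses. A toy longest-suffix stripper illustrates the idea; the romanized suffix list below is illustrative only, and the actual tool used in the paper is the separately developed Bengali lemmatizer, not this sketch.

```python
# Illustrative romanized nominal suffixes (plural, case, determiner,
# emphatic markers); a real lemmatizer covers far more forms.
SUFFIXES = ("khani", "khana", "gulo", "der", "ke", "te", "ra", "i", "o")

def toy_lemmatize(word):
    """Strip the longest matching suffix, leaving at least a two-letter
    stem; return the word unchanged if no suffix matches."""
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) > len(suf) + 1:
            return word[: -len(suf)]
    return word
```

Mapping many inflected forms onto one stem is what raised the WSD accuracy from 80% to 85% in the lemmatized setting reported in [33].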
The adjectives, on the other hand, are normally inflected with some primary and secondary adjectival suffixes denoting degree, quality, quantity and similar other attributes. As a result, building a complete and robust system for WSD, considering all types of morphologically derived forms with lexical information and semantic relations, is a real challenge in the Bengali language.

5.2 Vast semantic variety

The vast semantic variety of Bengali words is also a big challenge in this research field. Examples are the following.

5.2a Same sense with no contextual similarity: A few sentences are encountered that carry a similar sense, but there is no similarity among the contextual words. For example:
"(jekhāler kathā tomāke bolechi tāte ek fontāo jal nei.)"
"(ei rashmi jeebānu nāshak er dwārā jal jeebānu mukta karā hay.)"
"(ek anu gliseral tin anu stiyārik yāsid ek anu gliseral trāistiyāret tin anu jal.)"
"(rāmkrishnadeb balten āguner tāpe jal garam hoye fute uthle ālu patalgulo sab opar neec korte thāke choto chelerā vābe ālu patalgulo lāfācche.)" etc.
Establishing a semantic relation among these sentences through the contextual words is a big challenge.

5.2b Same contextual words with different senses: This is just the opposite of the previous issue. There are several sentences that carry dissimilar meanings through their similar contextual words. For example:
"(sei yuge mānus chila yāyābar prakritir.)" and
"(vuparyatak kalambās chilen yāyābar prakritir mānus.)"
"(sei yuge mānus guhār avyantare prāceer gātre dainandin shikārer hisāb o nihata jeebjantur sankhyā prastarkhander sāhāyye khodāi kariyā rākhita.)" and
"(orishār mayurvanj elākāy kichu mānus āchen ynārā banshaparamparāy pātharer murti khodāi kare bājāre bikri kore sansār cālān.)" etc.
These sentences are composed of similar key words but they carry different senses individually.

5.2c Presence of contextual words in</s>
a single sentence carrying different senses: A few sentences are encountered where keywords carrying multiple senses are present in a single sentence to denote a single sense as a whole. For example:
"(pāndulipir dhoosar pātāy tnār ātmajeebanee ājo etotāi jeebanta ye ekbār parte shuru karle cokher pātā prenā.)"
In this sentence, while disambiguating the word "pātā", the word "pāndulipi" is a contextual word for the sense "pristhā", the word "dhoosar" is a contextual word for the sense "pristhā" as well as "gācher pātā", and "cokh" is a contextual word for the sense "akshi pallab".

Figure 6. Flowchart of the Context Expansion approach.

5.2d Sentence with sense anomaly: A few sentences are encountered where it appears quite impossible to tag a particular sense even by human judgment. For example:
"(se pareekshār cāridike eta sanyamer bestan ye sāmānya mānus teman upavog lāv karibār sahisnutā sanchay karite pāre nā.)"
"(tantra bale se kathā gurumukh kariyā shunite hay.)"

5.3 Very long sentences

Some sentences are so long that they carry a large amount of irrelevant information. For example:
"(keha bā dui kāne ānul cāpiyā jhup jhup kariyā drutabege katakgulo dub pāriyā caliyā yāita, keha bā dub nā diyā gāmchāy jal tuliyā ghana ghana māthāy dhālite thākita, keha bā jaler uparivāger malinatā erāibār janya bārbār dui hāte jal kātāiyā laiyā hathāt eksamay dhnā kariyā dub pārita, keha bā uparer sniri haitei binā voomikāy sashabde jaler madhye jhāmp diyā pariyā ātmasamarpan karita, keha bā jaler madhye nāmite nāmite ek nishwāse katakguli shlok āorāiyā laita, keha bā bysta konomate snān sāriyā laiyā bāri jāibār janya utsuk, kāhāro bā bystatā leshmātra nāi dheere susthe snān kariyā jap kariyā gā muchiyā kāpar chāriyā knocātā dui tinbār jhāriyā bāgān haite kichu bā ful tuliyā mridumanda dodul gatite snānsnigdha shareerer ārāmtike bāyute bikeerna karite karite bārir dike tāhār yātrā.)"

5.4 Very short sentences

Some sentences are very short in length. As a result, the system could not retrieve sufficient information from them. For example:
"(ne jal ān.)"
"(bāki raila ekmātra mānus.)"

5.5 Spelling errors

In a few cases, spelling errors in the words are obstacles during execution of the system.

Table 3. Sense definitions of a sample word from the existing Bengali WordNet.

The wrong use of the letters transliterated "sh", "s", "s"; "i", "ee"; "u", "oo"; "t" and "t", and different typographical mistakes in the words are the major issues in this aspect. These errors can be managed easily in a manual system; however, in an automated system, these spelling errors directly affect the performance of the system.

5.6 Scarcity of information in the WordNet

The Bengali WordNet is in its developing phase, so it is not a complete reference for the semantic information of Bengali words.
(a) The different sense definitions of some common Bengali ambiguous words are missing in this dictionary, such as "mānus" (a single sense available), "parā" (absent), etc., and a few ambiguous inflected forms such as "neece", "dhare", "fale", "mane", etc. are also absent.
(b) Some sense definitions are found in the WordNet that are absent in the standard lexical dictionary, as well as being unknown to the linguistic experts (see table 5).
(c) Some common relations among the senses
of the words are not established (properly or at all) in this online dictionary, such as hypernymy, hyponymy, holonymy, meronymy, antonymy, etc.

5.7 Usefulness of function words in Bengali

Handling function words and content words in Bengali is one of the toughest jobs. To bring the size of the data sets to some manageable length, a few function words are removed from the data sets. As there is no fixed boundary between function words and content words, the keywords with a primary part-of-speech (noun, verb, adjective and adverb) are considered as content words. However, the diversity in senses of Bengali words is so large that a few parts-of-speech, such as indeclinables, postpositions and auxiliary verbs, are used as function words in some cases and as content words in others. For example, the words "haoyā", "karā", etc. are generally used as auxiliary verbs, but when used as part of a compound word they act as content words, such as "mānus karā", "hāt karā", etc. The word "kāche" is used in different sentences with three different parts-of-speech, as in "tār kāch theke enechi" (noun), "kāchepithe takhan lok chila nā" (adverb) and "bidyār kāche artha moolyaheen" (indeclinable). Hence, handling function words and content words in Bengali might be a separate research work.

Table 4. WSD result using Context Expansion and PCA.
Table 5. Unknown sense definitions in Bengali WordNet.

6. Conclusion and future work

In this work, WSD in the Bengali language is presented using an unsupervised methodology. First, the ME method is used as a baseline clustering algorithm. Next, two extensions, PCA over the feature vector and Context Expansion of the sentences using the WordNet, are implemented, and the Cosine Similarity Measure is used as a distance-based similarity measuring technique.

Although the accuracy of the result is increased by these two extensions, one obvious obstacle still remains in this methodology. As the size of the test instances is scaled down from document level (larger context) to sentence level (smaller context), two issues appear at the time of clustering: first, the TF of a feature in a sentence becomes very small, which plays an important role in the mathematical calculation of the clustering task, and second, the intra-cluster relations among the features have not been established properly, which has a great impact on the accuracy of the output.

Finally, through close observation it is also noticed that, although the collocating words of a keyword have multiple meanings in the WordNet, associated with related synsets, glosses and example sentences, they do not participate in lexical overlap because their sense domains are different.

References

[1] Ide N and Véronis J 1998 Word sense disambiguation: the state of the art. Computational Linguistics 24(1): 1–40
[2] Navigli R 2009 Word sense disambiguation: a survey. ACM Computing Surveys 41(2): 1–69
[3] Sanderson M 1994 Word sense disambiguation and information retrieval. In: Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'94, July 03–06, Dublin, Ireland, Springer, New York, pp. 142–151
[4] Mihalcea R and Moldovan D 2000 An iterative approach to word sense disambiguation. In: Proceedings of FLAIRS, Orlando, FL, pp. 219–223
[5] Sanderson M 1994 Word sense disambiguation and information retrieval.
In: Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'94, Dublin, Ireland, pp. 142–151
[6] Banerjee S and Pedersen T 2002 An adapted Lesk algorithm for word sense disambiguation using WordNet. In: Proceedings of the Third International Conference on Computational Linguistics and Intelligent Text Processing, pp. 136–145
[7] Lesk M 1986 Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In: Proceedings of SIGDOC '86, the 5th Annual International Conference on Systems Documentation, Toronto, Ontario, Canada, pp. 24–26
[8] Seo H, Chung H, Rim H, Myaeng S H and Kim S 2004 Unsupervised word sense disambiguation using WordNet relatives. Computer Speech and Language 18(3): 253–273
[9] Martin W T and Berlanga L R 2012 A clustering-based approach for unsupervised word sense disambiguation. In: Procesamiento del Lenguaje Natural, Revista no. 49, pp. 49–56
[10] Heyan H, Zhizhuo Y and Ping J 2011 Unsupervised word sense disambiguation using neighborhood knowledge. In: Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation, pp. 333–342
[11] Niu C, Li W, Srihari R K, Li H and Crist L 2004 Context clustering for word sense disambiguation based on modeling pairwise context similarities. In: Proceedings of SENSEVAL-3, Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, Barcelona, Spain
[12] Jurafsky D and Martin J H 2000 Speech and language processing. ISBN 81-7808-594-1, Pearson Education (Singapore) Pte. Ltd., Indian Branch, Delhi 110092, India
[13] Singh R L, Ghosh K, Nongmeikapam K and Bandyopadhyay S 2014 A decision tree based word sense disambiguation system in Manipuri language. Advanced Computing: An International Journal 5(4): 17–22
[14] Haroon R P 2010 Malayalam word sense disambiguation. In: Proceedings of the 2010 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)
[15] Kumar R and Khanna R 2011 Natural language engineering: the study of word sense disambiguation in Punjabi. Research Cell: An International Journal of Engineering Sciences 1: 230–238
[16] Sarmah J and Sarma S K 2016 Decision tree based word sense disambiguation for Assamese. International Journal of Computer Applications 141: 42–48
[17] Kalita P and Barman A K 2015 Implementation of Walker algorithm in word sense disambiguation for Assamese language. In: Proceedings of the International Symposium on Advanced Computing and Communication (ISACC), pp. 136–140
[18] Shahid H and Preeti Y 2014 Study of Hindi word sense disambiguation based on Hindi WordNet. International Journal for Research in Applied Science and Engineering Technology 2(5): 390–395
[19] Vishwarkarma S and Vishwarkarma C 2012 A graph-based approach to word sense disambiguation for Hindi language. International Journal of Scientific Research Engineering & Technology 1(5): 313–318
[20] Singh S 2013 Hindi word sense disambiguation using semantic relatedness measure. In: Proceedings of the International Workshop on Multi-disciplinary Trends in Artificial Intelligence, pp. 247–256
[21] Yadav P and Vishwarkarma S 2013 Mining association rules based approach to word sense disambiguation for Hindi language. International Journal of Emerging Technology and Advanced Engineering 3(5): 470–473
[22] Tomar G S et al 2013 Probabilistic latent semantic analysis for unsupervised word sense disambiguation. International Journal of Computer Science Issues 10(5): 127–133
[23] Kumari S and Singh P 2013 Optimized word sense disambiguation in Hindi using genetic algorithm. International Journal of Research in Computer and Communication Technology 2(7): 445–449
[24] Tayal D K 2015 Word sense disambiguation
in Hindi language using hyperspace analogue to language and fuzzy C-means clustering. In: Proceedings of the International Conference on Natural Language Processing (ICON)
[25] Roy A, Sarkar S and Purkayastha B S 2014 Knowledge based approaches to Nepali word sense disambiguation. International Journal on Natural Language Computing 3(3): 51–63
[26] Aung N T, Soe K M and Thein N L 2011 A word sense disambiguation system using Naive Bayes algorithm for Myanmar language. International Journal of Scientific & Engineering Research 2(9): 1–7
[27] Merhben L, Zouaghi A and Zrigui M 2010 Ambiguous Arabic words disambiguation. In: Proceedings of the 11th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, pp. 157–164
[28] Bouhriz N, Benabbou F and Lahmar E H B 2016 Word sense disambiguation approach for Arabic text. International Journal of Advanced Computer Science and Applications 7(4): 381–385
[29] Merhbene L, Zouaghi A and Zrigui M 2013 A semi-supervised method for Arabic word sense disambiguation using a weighted directed graph. In: Proceedings of the International Joint Conference on Natural Language Processing, pp. 1027–1031
[30] Das A and Sarkar S 2013 Word sense disambiguation in Bengali applied to Bengali–Hindi machine translation. In: Proceedings of the 10th International Conference on Natural Language Processing (ICON), Noida, India
[31] Pandit R and Naskar S K 2015 A memory based approach to word sense disambiguation in Bangla using k-NN method. In: Proceedings of the 2nd IEEE International Conference on Recent Trends in Information Systems (ReTIS), pp. 383–386
[32] Nazah S, Hoque M M and Hossain R 2017 Word sense disambiguation of Bangla sentences using statistical approach. In: Proceedings of the 3rd International Conference on Electrical Information and Communication Technology (EICT), pp. 1–6
[33] Pal A R, Saha D, Naskar S and Dash N S 2015 Word sense disambiguation in Bengali: a lemmatized system increases the accuracy of the result. In: Proceedings of the 2nd IEEE International Conference on Recent Trends in Information Systems (ReTIS), pp. 342–346
[34] Dash N S 1999 Corpus oriented Bangla language processing. Jadavpur Journal of Philosophy 11(1): 1–28
[35] Dash N S and Chaudhuri B B 2001 A corpus based study of the Bangla language. Indian Journal of Linguistics 20: 19–40
<s>Proceedings Template - WORDLabeling of Query Words using Conditional Random FieldSatanu Ghosh West Bengal University of Technology +91-7278137003 satanu.ghosh.94@gmail.comSouvick Ghosh Jadavpur University +91-9007728924 souvick.gh@gmail.com Dipankar Das Jadavpur University +91-9432226464 dipankar.dipnil2005@gmail.com ABSTRACT This paper describes our approach on Query Word Labeling as an attempt in the shared task on Mixed Script Information Retrieval at Forum for Information Retrieval Evaluation (FIRE) 2015. The query is written in Roman script and the words were in English or transliterated from Indian regional languages. A total of eight Indian languages were present in addition to English. We also identified the Named Entities and special symbols as part of our task. A CRF based machine learning framework was used for labeling the individual words with their corresponding language labels. We used a dictionary based approach for language identification. We also took into account the context of the word while identifying the language. Our system demonstrated an overall accuracy of 75.5% for token level language identification. The strict F-measure scores for the identification of token level language labels for Bengali, English and Hindi are 0.7486, 0.892 and 0.7972 respectively. The overall weighted F-measure of our system was 0.7498. CCS Concepts • Computing methodologies~Natural language processing • Computing methodologies~Information extraction Keywords Transliteration, Word level language identification, Code-switch 1. INTRODUCTION Language Identification is a necessary prerequisite for processing any user generated text, where the language is unknown. The identification of the language can be done at document level or at word level. 
While language identification was previously considered a solved problem, the recent proliferation of social media and phenomena such as code-switching, code-mixing, lexical borrowing and phonetic typing have introduced a new dimension to it. Random contractions ('em' in place of 'them', or 'shan't' in place of 'shall not') and transliterations have further complicated the problem. Spelling variations, transliterations and non-adherence to formal grammar are also quite common in such text [11, 14]. Language identification for documents is a well-studied natural language problem [2]. King and Abney [6] presented the different aspects of this problem and focused on labeling the language of individual words in a set of multilingual documents, proposing language identification at the word level in mixed-language documents instead of at the sentence level. The last few decades have seen the development of transliteration systems for Asian languages; notable systems were built for Chinese [7], Japanese [4], Korean [5], Arabic [1], etc. Transliteration systems were also developed for Indian languages [3, 9]. 2. TASK DEFINITION A query q : < w1 w2 w3 ... wn > is written in Roman script. The words w1, w2, w3, ... wn could be standard English words or transliterated from Indian languages (L). The languages (L) can be Bengali (Bn), English (En), Gujarati (Gu), Hindi (Hi), Kannada (Ka), Malayalam (Ml), Marathi (Mr), Tamil (Ta) or Telugu (Te). The objective of the task is to identify each word as English or as a member of L, depending on whether it is a standard English</s>
<s>word or a transliterated L-language word. The words of a single query usually come from 1 or 2 languages and very rarely from 3 languages. In mixed-language queries, one of the languages is either English or Hindi; thus, queries are formed by mixing Tamil and English words, or Bengali and Hindi words, but not, for example, Gujarati and Kannada words. We were also required to identify the named entities as NE (e.g. Sachin Tendulkar, Kolkata, etc.). 3. DATASET AND RESOURCES This section describes the dataset that has been used in this work. The training and the test data were constructed using manual and automated techniques and made available to the task participants by the organizers. The training dataset consists of 2908 sentences, whereas the test set contains 792 sentences. The following resources provided by the organizers were also employed: English word frequency list1: a plain tab-separated text file containing English words collected from a standard dictionary, followed by their frequencies computed from a large corpus. It contains noise (very low-frequency entries) as it is constructed from news corpora. Hindi word transliteration pairs1 [10]: a plain tab-separated text file containing a total of 30,823 transliterated Hindi words (in Roman script) followed by the same word in Devanagari. It also contains Roman spelling variations for the same Hindi words (the transliteration pairs were found by alignment of Bollywood song lyrics). However, it does not contain the frequency or occurrence of a particular word transliteration pair. Bangla word frequency list2: a plain tab-separated text file containing Bengali words (Roman script, ITRANS format) followed by their frequency computed from a large Anandabazar Patrika news corpus. [1 http://cse.iitkgp.ac.in/resgrp/cnerg/qa/fire13translit/index.html 2 http://cse.iitkgp.ac.in/resgrp/cnerg/qa/fire13translit/index.html]
An ITRANS-to-UTF-8 converter is used for obtaining the words in Bengali script. Gujarati word transliteration pairs2: a plain tab-separated text file containing transliterated Gujarati words (Roman script) followed by the same word in Gujarati script. Due to the poor availability of Gujarati resources, a small list of 546 entries was created from the training data of the FIRE shared task. Google Input Tools3: we used the lookup tables of transliterated word pairs provided by Google Input Tools, which contain transliterated pairs from native Indian languages to Roman script. We used these tables for all 8 Indian languages to create a word list for each language. Corncob Web Dictionary4: the dictionary contains 58,110 distinct English words; we used it to identify English words. Stanford NE Tagger5: Named Entity Recognition (NER) labels sequences of words in a text which are the names of things, such as person and company names, or gene and protein names, etc. We also developed 11 lists of our own, which are as follows: Named Entity List: developed from the training data; it contains 648 distinct names. Emoticon List: developed using Wikipedia; it contains 273 distinct emoticons. Language Wordlists: we
<s>developed nine wordlists for nine different languages using the training data; the wordlists contained a few overlapping words. 4. SYSTEM DESCRIPTION Our primary task was word-level language classification; however, identification of named entities was also necessary. 4.1 Word-level Language Identification Features The following features were used for language identification: 4.1.1 Capitalization Three Boolean capitalization features encode capitalization information. As all the words are in Roman script, we use the ASCII value to identify a capital character. The first feature is whether the first character of the word is capitalized; this is an important feature, as it is later used for identification of named entities. The second feature is whether any character in the word is capitalized, and the third is whether the whole word is capitalized. Examples are words like Mumbai, BCSE, 3G, etc. CAP1: Is the first letter capitalized? If yes, then CAP1 = 1, else 0. CAP2: Is any character capitalized? If yes, then CAP2 = 1, else 0. CAP3: Are all characters capitalized? If yes, then CAP3 = 1, else 0. [3 https://www.google.com/inputtools/ 4 http://www.mieliestronk.com/wordlist.html 5 http://nlp.stanford.edu/software/CRF-NER.shtml] 4.1.2 Word-level Context The previous three words and the next three words, along with the current token and the length of the current token, are used as contextual features. As language identification and points of code-switch are context-sensitive [12, 18, 19], we have used this feature only for classification. This feature is crucial for resolving ambiguity in the word-level language identification problem. Consider the examples given below: Mama take this badge off of me. Ami take boli je ami bansdronir kichu agei thaki. The word `take' exists in the English vocabulary. However, the backward transliteration of `take' is a valid Bengali word.
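The capitalization flags (CAP1-CAP3) and the contextual window described above can be sketched as follows; this is a simplified stand-in for the feature columns fed to CRF++, and the function names are ours:

```python
# CAP1-CAP3 capitalization flags and CON1-CON3 context features for
# the token at position i, mirroring the feature definitions above.
def cap_features(word):
    return {
        "CAP1": int(word[:1].isupper()),              # first letter capital
        "CAP2": int(any(c.isupper() for c in word)),  # any character capital
        "CAP3": int(word.isupper()),                  # all characters capital
    }

def context_features(tokens, i, window=3):
    return {
        "CON1": tokens[i],                        # current token
        "CON2": (tokens[max(0, i - window):i],    # previous 3 tokens
                 tokens[i + 1:i + 1 + window]),   # next 3 tokens
        "CON3": len(tokens[i]),                   # token length
    }
```

In the real system these values become columns of the CRF++ training file, where the template's unigram features combine them with the surrounding rows.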
Words like `take', `are', `pore', and `bad' are truly ambiguous with respect to the word-level language identification problem, as they are valid English words and their backward transliterations are valid Bengali words. Here, the context of the word can be used to correctly identify the language of such an ambiguous word. The dynamic unigram feature in the CRF++ template file analyses the previous token and the next token for their language, and the language of the current token is annotated according to this context; we therefore consider it a very useful feature. CON1: current token. CON2: previous 3 and next 3 tokens. CON3: length of the current token; this feature is important because words in Indian languages tend to be longer than words in English. 4.1.3 Special Character A word might start with a symbol, e.g. #, @, etc. These Boolean features indicate the presence of a hashtag (#), an at sign (@), a hyperlink or an emoticon. A list of 273 distinct emoticons using different kinds of special characters was made and used for identification of emoticons. Examples: @aapyogendra, #aapsweep, http://t.co/pym4cr6xx0. CHR1: Does the word start with #? If yes, then 1, else 0. CHR2: Does the word start with @?
<s>If yes, then 1, else 0. CHR3: Does the word start with http? If yes, then 1, else 0. CHR4: Is it an emoticon? If yes, then 1, else 0. 4.1.4 Dictionary Feature A total of 9 different languages had to be identified, so we used 9 different lexical resources, one for each language, and 9 Boolean features to represent whether a particular token is present in a particular lexicon. If a word is present in more than one lexicon, we use a unigram relational feature in the CRF++ template file to handle the ambiguity; this unigram relational feature is determined from two or more other features, for example U1: %x[0,20]/%x[0,21]. LEX1: Is the token present in the English dictionary? If yes, then 1, else 0. LEX2, LEX3, ..., LEX9 likewise for the other languages. 4.1.5 Presence of Symbol in Word One Boolean feature identifies words with punctuation marks in them. The punctuation mark can be an apostrophe ('), a dash (-), etc., for example goalkeeper's, angul-er. CHR5: Is a symbol present? If yes, then 1, else 0. 4.1.6 Presence of Digit This Boolean feature indicates whether a word contains a digit. As the provided corpus contains social media text, this feature is useful: in phonetic script, people often use digits to shorten their text, for example `gr8' in place of `great' or `4nds' for `friends'. CHR6: Is a digit present? If yes, then 1, else 0. 4.1.7 Number Identification This Boolean feature identifies whether the token is a number, e.g. 30, 67, etc. CHR7: Is the token a number? If yes, then 1, else 0. 4.1.8 Named Entity Identification For NE identification we use the Stanford NE Tagger6 along with a lexicon of named entities, via two Boolean features: the first is a basic lexicon search and the second is the Stanford NE Tagger. We use another unigram relational feature in CRF++ for classification of NE tags. The basic lexicon is the named entity list which we developed for our task.
NE1: If the named entity matches List1, then NE1 = 1, else 0. NE2: If the named entity matches List2, then NE2 = 1, else 0. 5. RESULTS In this work, a Conditional Random Field (CRF) [13] has been used to build the framework for the word-level language identification classifier. We have used the CRF++ toolkit7, a simple, customizable, open-source implementation of CRF. The accuracies with respect to nine different languages, as well as average and weighted F-measures, are shown in Table 1 and Table 2.
Table 1: Token-level results for language identification
Language | Precision | Recall | F-Measure
X | 0.9423 | 0.7525 | 0.8367
Bengali | 0.8129 | 0.6937 | 0.7486
English | 0.9318 | 0.8555 | 0.892
Gujarati | 0.0757 | 0.4118 | 0.1279
Hindi | 0.7772 | 0.8182 | 0.7972
Kannada | 0.2793 | 0.799 | 0.4139
Malayalam | 0.2597 | 0.6522 | 0.3715
Marathi | 0.4956 | 0.8687 | 0.6311
Tamil | 0.5672 | 0.817 | 0.6696
Telugu | 0.3874 | 0.8153 | 0.5252
Table 2: Other performance metrics
Token Accuracy (in %) | 75.4896
Utterance Accuracy (in %) | 21.5909
Average F-Measure | 0.538392
Weighted</s>
<s>F-Measure | 0.749833
Table 3: Confusion matrix between languages
.. | en | X | hi | bn | ml | mr | kn | te | gu | ta
en | 72 | 79 | 37 | 47 | 1 | 2 | 1 | 16 | 1 | 6
X | 32 | 63 | 2 | 1 | 0 | 0 | 0 | 1 | 0 | 0
hi | 1 | 84 | 42 | 38 | 0 | 6 | 3 | 6 | 9 | 0
bn | 84 | 71 | 50 | 12 | 0 | 7 | 2 | 4 | 9 | 8
ml | 19 | 38 | 2 | 13 | 60 | 1 | 12 | 0 | 0 | 13
mr | 23 | 33 | 53 | 65 | 2 | 5 | 3 | 2 | 1 | 1
kn | 59 | 93 | 8 | 9 | 2 | 2 | 7 | 10 | 0 | 19
te | 54 | 50 | 22 | 2 | 5 | 9 | 5 | 3 | 0 | 6
gu | 18 | 13 | 77 | 39 | 0 | 3 | 6 | 0 | 14 | 9
ta | 33 | 74 | 3 | 4 | 20 | 0 | 5 | 0 | 0 |
[6 http://nlp.stanford.edu/software/CRF-NER.shtml 7 http://crfpp.googlecode.com/svn/trunk/doc/index.html]
6. ERROR ANALYSIS Looking at the confusion matrix for the different languages, we notice that words of many other languages have been wrongly classified as English, primarily due to overlapping words between English and all the Indian languages. In our task, the accuracies for MIXes and NEs were quite low. The primary reason for the increased error rate in MIX determination was the absence of post-processing measures to identify mixed words; the sub-classification errors in NE recognition could also have been significantly reduced by adding an NE-classification module to our system. Our accuracy also declined for Gujarati, Kannada and Malayalam; larger wordlists and transliteration dictionaries should improve these scores. 7. CONCLUSION In this paper, we presented a brief overview of our system for automatic word-level language identification. While the CRF-based approach was satisfactory, the results could be improved by including post-processing heuristics for identifying mixed words and named entities. Using more character-level features should improve the accuracy of the system, as should some basic knowledge about other languages and better wordlists and dictionaries for the regional languages. We tried character n-grams (n = 1 to 5) as one of the CRF++ features; however, the performance of the system declined on incorporating them. 8. ACKNOWLEDGMENTS Our thanks to the organizers of the FIRE 2015 shared task.
Royal Sequira of Microsoft Research was very helpful throughout the work. We would also like to thank Nagesh Bhattu, who corrected the annotations of numeric entities in the training data. 9. REFERENCES [1] Y. Al-Onaizan and K. Knight. Named entity translation: Extended abstract. In HLT, pages 122-124, Singapore, 2002. [2] K. R. Beesley. Language identifier: A computer program for automatic natural-language identification of on-line text. In ATA, pages 47-54, 1988. [3] A. Ekbal, S. Naskar, and S. Bandyopadhyay. A modified joint source channel model for transliteration. In COLING-ACL, pages 191-198, Australia, 2006. [4] I. Goto, N. Kato, N. Uratani, and T. Ehara. Transliteration considering context information based on the maximum entropy method. In MT-Summit IX, pages 125-132, New Orleans, USA, 2003. [5] S. Y. Jung, S. L. Hong, and E. Paek. An English to Korean transliteration model of extended Markov
<s>window. In COLING, pages 383-389, 2000. [6] B. King and S. Abney. Labeling the languages of words in mixed-language documents using weakly supervised methods. In NAACL-HLT, pages 1110-1119, 2013. [7] H. Li, Z. Min, and J. Su. A joint source-channel model for machine transliteration. In ACL, page 159, 2004. [8] V. Sowmya, M. Choudhury, K. Bali, T. Dasgupta, and A. Basu. Resource creation for training and testing of transliteration systems for Indian languages. In LREC, pages 2902-2907, 2010. [9] H. Surana and A. K. Singh. A more discerning and adaptable multilingual transliteration mechanism for Indian languages. In COLING-ACL, pages 64-71, India, 2008. [10] Kanika Gupta, Monojit Choudhury, and Kalika Bali. Mining Hindi-English transliteration pairs from online Hindi lyrics. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC '12), pages 2459-2465, Istanbul, Turkey, 2012. [11] Spandana Gella, Kalika Bali, and Monojit Choudhury. "ye word kis lang ka hai bhai?" Testing the limits of word-level language identification. NLPAI, December 2014. [12] Gokul Chittaranjan, Yogarshi Vyas, Kalika Bali, and Monojit Choudhury. Word-level language identification using CRF: Code-switching shared task report of MSR India system. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 73-79, 2014. [13] Taku Kudo. CRF++: Yet another CRF toolkit. 2014. [14] Utsab Barman, Amitava Das, Joachim Wagner, and Jennifer Foster. Code-mixing: A challenge for language identification in the language of social media. In the 1st Workshop on Computational Approaches to Code Switching, EMNLP 2014, pages 13-23, Doha, Qatar, October 2014. [15] Amitava Das and Björn Gambäck. Code-mixing in social media text: The last language identification frontier? Traitement Automatique des Langues (TAL): Special Issue on Social Networks and NLP, TAL Volume 54, no. 3/2013, pages 41-64, 2014.
[16] Utsab Barman, Joachim Wagner, Grzegorz Chrupała, and Jennifer Foster. DCU-UVT: Word-level language classification with code-mixed data. In EMNLP, 2014. [17] Somnath Banerjee, Aniruddha Roy, Alapan Kuila, Sudip Kumar Naskar, Sivaji Bandyopadhyay, and Paolo Rosso. A hybrid approach for transliterated word-level language identification: CRF with post-processing heuristics. In Proceedings of the Shared Task on Transliterated Search, FIRE 2014. [18] Pieter Muysken. The study of code-mixing. In Bilingual Speech: A Typology of Code-Mixing. Cambridge University Press, 2001. [19] Shana Poplack. Sometimes I'll start a sentence in Spanish y termino en español: Toward a typology of code-switching. Linguistics, 18:581-618, 1980.</s>
<s>A Bengali Text Generation Approach in Context of Abstractive Text Summarization Using RNN. Chapter, March 2020. DOI: 10.1007/978-981-15-2043-3_55.
A Bengali Text Generation Approach in Context of Abstractive Text Summarization using RNN Sheikh Abujar, Abu Kaisar Mohammad Masum, Md. Sanzidul Islam, Fahad Faisal, Syed Akhter Hossain. Dept. of CSE, Daffodil International University, Dhaka, Bangladesh {sheikh.cse, mohammad15-6759, sanzidul15-5223, fahad.cse}@diu.edu.bd, aktarhossain@daffodilvarsity.edu.bd Abstract. Automatic text summarization is one of the notable research areas of natural language processing. The amount of data is increasing rapidly, and understanding the gist of any text has become a daily necessity. The area of text summarization has been developing for many years.
Considerable research has already been done on the extractive summarization approach; abstractive summarization, on the other hand, is the way to summarize a text as a human would. The machine should produce a new kind of summary that reads like a human-generated one. Several systems for abstractive summarization have already been developed for English. This paper presents a necessary building block, text generation, in the context of developing Bengali abstractive text summarization. Text generation helps the machine understand the patterns of human-written text and then produce output that reads as if written by a human. A basic Recurrent Neural Network (RNN) has been applied for this text generation approach, using its most applicable and successful variant, long short-term memory (LSTM). Contextual tokens have been used for better sequence prediction. The proposed method has been developed so that it can be reused in further development of abstractive text summarization. Keywords: Natural Language Processing, Deep Learning, Text Pre-processing, Text Generation, Abstractive Text Summarization, Bengali Text Summarization. 1 Introduction Machine learning and data mining algorithms perform better with a large labeled dataset: it helps the machine understand the patterns behind any specific or general requirement and produce better output. Text summarization is one of the most important branches of natural language processing research. Extractive text summarization relies on frequency, word- and/or sentence-repetition patterns, several word/sentence scoring methods and other lexical analyses [1]. Extractive summarization in both English and Bengali has already been developed successfully, but this type of summarization is often no longer sufficient.
The need of research today</s>
<s>is to develop machines that understand the context of any given information and can produce a summary based on that understanding. This type of summarization is now at a stage of being compared with human-generated summaries, and it is called abstractive summarization. Nowadays, information in every language is available on the internet and offline. Major research in this domain has been done for English and only a little for Bengali. The Bengali language has several limitations in data preprocessing; the best way of overcoming many of these problems is converting the text into Unicode [2]. The dataset is the major contributor to a successful research outcome, and a large-scale labeled dataset is a must for this purpose. An RNN processes sequence data very well because of its recurrent structure: its hidden units are updated at every step, and it has no limitation on sequence length. Both forward and backward computation help the neurons understand the sequence [3]. The majority of classifiers cannot provide the expected result if the dataset is very small. For abstractive summarization, the machine requires an understanding of the structure of human-written text, from which it can learn the patterns of human writing; based on this understanding, the machine will provide a summary on its own. To write a new sentence, the machine needs to use the patterns it previously learned from human-written sentences. In this way, predicted summaries containing incomplete sentences can be brought into complete and corrected form. This paper presents our research implementation of text generation; the entire preprocessing pipeline, the dataset structure and the results are discussed. 2 Literature Review LSTM is the most widely used RNN model today [4]: based on contextual tokens, the gates of every neuron help the model predict the next pattern more accurately. On this basis, bidirectional LSTM models have been built [5].
When there is a lot of variety in the data sequences, this type of LSTM helps generate sequence data and makes the whole model easier to use. Several direct or embedded sentence-generation approaches have used LSTM [6]; Sequence Generative Adversarial Nets (SeqGAN) used a Monte Carlo method to identify the next predicted token. This method has also been applied with a neural network decoder and domain-based knowledge for dialogue generation [7]. Sentence generation is essentially a decision-making process and a computational representation of information that requires understanding the data sequence in many forms. It follows a goal-oriented method, and several reinforcement learning models, such as the actor-critic algorithm, have been applied to sentence generation [8]. Ho et al. [9] explained the relation between GANs and inverse reinforcement learning. Hu, Yang et al. [10] carried out several studies to reduce the loss between input and target output data in encoder-decoder settings; recently, VAEs have achieved outstanding results. Abstractive methods require deep investigation of the given input text to extract the knowledge needed to generate new sentences. Tanaka et al. [11] explained several content-selection techniques as well as rewriting methods. With the continued improvement of</s>
<s>sentence generation, abstractive methods will become more accurate, and machines will be able to predict and complete whole sentences from the predicted contextual tokens. Text generation is essential for sequence-to-sequence word ordering. In this paper we describe a technique for generating the next Bengali word sequence using an RNN with LSTM; a practical use of such text generation is machine translation for the Bengali language. 3 Methodology Language modelling is one of the most important parts of modern NLP; tasks such as text summarization, machine translation, text generation and speech-to-text all build on it, and text generation is a significant part of language modelling. A well-trained language model acquires knowledge of the probability of occurrence of a word based on the previous series of words. In this paper, we discuss n-gram language modelling for text generation and create a recurrent neural network as the training model. Figure 1 shows our working flow. Figure 1: Working flow for text generation. A. Data collection & pre-processing Since we are working with Bengali text, we need a good dataset. We use our own dataset, which was collected from social media and contains several types of Bengali posts, such as group posts, personal posts and page posts. The structure of Bengali text is an obstacle to collecting Bengali data, but in our dataset we tried to remove such obstacles so as to keep pure Bengali text. Our dataset contains the text together with its type and a summary; for this work we use only the text and its summary to generate the next Bengali word in sequence. Before preparing the dataset for text generation, we need to expand Bengali contractions, since a contraction is the short form of a phrase, e.g. "বি.দ্র" = "বিশেষ দ্রষ্টিয", "ড." = "ডক্টর". After collecting the dataset, we need to clean it before generating text.
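The contraction-expansion step can be sketched with a simple lookup table; the two entries below are the ones mentioned in the text, and a real system would carry a much larger, manually curated mapping:

```python
# Expand Bengali contractions before cleaning. Only the two example
# mappings from the text are included; the full list is assumed to be
# much larger.
CONTRACTIONS = {
    "বি.দ্র": "বিশেষ দ্রষ্টিয",
    "ড.": "ডক্টর",
}

def expand_contractions(text):
    # Replace longer contractions first so that a short key is never
    # substituted inside a longer one.
    for short in sorted(CONTRACTIONS, key=len, reverse=True):
        text = text.replace(short, CONTRACTIONS[short])
    return text
```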
To clean the data, we remove whitespace, digits and punctuation from the Bengali text, and remove Bengali stop words using a Bengali stop-word file. Finally, with the cleaned text we create a list containing each text with its summary, and from it we build a corpus for text generation. B. N-gram Token Sequences For text generation, the language model requires sequences of tokens from which it can predict the probability of the next word or sequence, so the words need to be tokenized. We use the Keras built-in tokenizer, which extracts each word with its index number from the corpus; after this, all texts are transformed into sequences of tokens. Each n-gram sequence contains the integer tokens made from the input text corpus, and every integer represents the index of a word in the text vocabulary. An example is given in Table 1. C. Pad Sequences Every sequence has a different length, so we pad the sequences to make their lengths equal, using the Keras pad-sequences function. As input to the learning model, we</s>
<s>use the n-gram sequences as the given words and the predicted word as the next word; an example is given in Table 2. Finally, we obtain the input X and the next word Y, which are used for training the model.
Table 1: Example of n-gram sequence tokens
N-GRAM TEXT | TOKEN SEQUENCE
হাইশেক পাকক | [103,45]
হাইশেক পাকক বির্ কাণ | [103,45,10]
হাইশেক পাকক বির্ কাণ কাজ | [103,45,10,24]
হাইশেক পাকক বির্ কাণ কাজ হাশে | [103,45,10,24,33]
হাইশেক পাকক বির্ কাণ কাজ হাশে বিশেশে | [103,45,10,24,33,67]
হাইশেক পাকক বির্ কাণ কাজ হাশে বিশেশে সরকার | [103,45,10,24,33,67,89]
Table 2: Example of pad sequence. D. Proposed Model A recurrent neural network works extremely well for sequential data because it can remember its output thanks to its internal memory. It can predict the upcoming sequence using this memory, with a deeper understanding of the sequence than other algorithms, since it considers the current state while also remembering what it learned from the previous state. The long short-term memory (LSTM) of an RNN helps it remember the previous sequence. Generally, a recurrent neural network has two inputs, its present input and the recent past, and remembering both the current and the previous input helps it generate complete text. The RNN applies weights to the input sequence over time and produces the weights of the next sequence as output. Figure 2: Recurrent neural network. The formulas are
H = σ(W_h · X) (1)
Y = softmax(W_y · H) (2)
where σ is the activation function, X the input, Y the output, H the hidden state and W a weight matrix. In our proposed model, we use the weights (w) of the text sequence as input over time (t). An LSTM cell can store the previous input state and then work with the current state. Figure 3 shows that the input consists of the previous state together with the current state: while working on the current state, the cell remembers the previous one and, using the activation function, predicts the next word or sequence. To train our model we define a Keras Sequential model and embed the total vocabulary with the input sequences, defining an LSTM layer with 256 units and 0.5 dropout.
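The tokenize, n-gram and pad steps of sections B and C can be sketched in plain Python. The paper uses the Keras Tokenizer and pad-sequences utilities; these helpers only imitate their behavior, and the function names are ours:

```python
# Plain-Python stand-ins for the Keras tokenizer / pad_sequences steps:
# index the vocabulary, expand each line into prefix n-gram sequences,
# left-pad to a common length, then split off the last token as target.
def build_word_index(corpus):
    index = {}
    for line in corpus:
        for word in line.split():
            index.setdefault(word, len(index) + 1)  # indices start at 1
    return index

def to_ngram_sequences(line, index):
    tokens = [index[w] for w in line.split()]
    # Prefix sequences of length 2, 3, ..., len(tokens), as in Table 1.
    return [tokens[:i + 1] for i in range(1, len(tokens))]

def pad_sequences(seqs, pad_value=0):
    max_len = max(len(s) for s in seqs)
    return [[pad_value] * (max_len - len(s)) + s for s in seqs]

def split_input_target(padded):
    X = [s[:-1] for s in padded]   # given words
    y = [s[-1] for s in padded]    # next word to predict
    return X, y
```

The pairs (X, y) produced this way are exactly the "given word / next word" pairs of Table 2, ready to be fed to the model.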
We add a Dense layer whose size equals the vocabulary size, with a softmax activation. As the loss function we use categorical cross-entropy, with the Adam optimizer.

Table 2: Example of given word / next word pairs
  জযাশর্র → জিয
  জযাশর্র জিয → এক্সার্
  জযাশর্র জিয এক্সার্ → বর্স

Algorithm 1: Bengali text generation
 1: function create_model(max_sequence_length, total_words):
 2:     declare Sequential()
 3:     add(Embedding(total_words, embedding_size, input_length))
 4:     add(LSTM(units))
 5:     add(Dropout(rate))
 6:     add(Dense(total_words, activation))
 7:     compile(loss, optimizer)
 8:     return model
 9: create_model(max_sequence_length, total_words)

In this segment we present a graphical view of the model; the output of each step proceeds onward to the Dense (output) layer (Figure 3: Visualizing the LSTM model structure). Figure 4 gives a short view of the working model: the LSTM stores the previous sequence and, while processing the current state to find the next sequence, it uses the activation function. The softmax activation is used to calculate
the probability of each candidate word, keeping only the most likely next sequence (Figure 4: View of the proposed model).

i. Long Short-Term Memory: Long short-term memory (LSTM) is a variant of the recurrent neural network designed to mitigate the vanishing and exploding gradient problems. Every LSTM cell has three gates, the input gate, the forget gate and the output gate, plus a cell state to which information is added via the gates:

i_t = σ(w_i[h_{t−1}, x_t] + b_i)                        (3)
f_t = σ(w_f[h_{t−1}, x_t] + b_f)                        (4)
o_t = σ(w_o[h_{t−1}, x_t] + b_o)                        (5)
c_t = f_t ∗ c_{t−1} + i_t ∗ σ(w_c[h_{t−1}, x_t] + b_c)  (6)
h_t = o_t ∗ σ(c_t)                                      (7)

where i_t is the input gate, f_t the forget gate, o_t the output gate, c_t the cell state, h_t the hidden state, and σ the activation function.

ii. Activation function: The softmax function is a logistic-style activation used for classification problems. It keeps each output between 0 and 1 and turns the outputs into probabilities:

σ(z)_j = e^{z_j} / Σ_k e^{z_k}    (8)

where z is the input to the output layer and j indexes the outputs.

Experiment and Output

After creating the model function we train the model, fitting it on the (current words, next word) pairs. We set the number of epochs to 150 and verbose = 2. Training took almost 3 hours and reached a good accuracy of 97% with a loss of 0.0132. Fig. 4 shows the model's training accuracy graph and Fig. 5 its loss graph.

Several earlier works address English text generation with unidirectional RNNs or LSTMs, but in the Bengali language very little work on text generation uses LSTMs. In this paper we apply such a method, process Bengali text for generation, and obtain better output; the comparison is given in Table 3.

Our main goal in this experiment is to generate the next sequence of words. For output, we created a function that takes a token list and a seed text.
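Equations (3)-(7) and the softmax of Eq. (8) can be sketched directly in NumPy. This is an illustrative re-implementation, not the Keras code the paper actually uses; the parameter names (`wi`, `bi`, etc.) follow the equations. Note that the paper writes σ for the cell-state activation in Eqs. (6)-(7), whereas the conventional choice, used here, is tanh.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    # Eq. (8): sigma(z)_j = exp(z_j) / sum_k exp(z_k)
    e = np.exp(z - np.max(z))  # subtract max for numerical stability
    return e / e.sum()

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step following Eqs. (3)-(7).

    p maps "wi", "wf", "wo", "wc" to (hidden, hidden+input) weight
    matrices and "bi", "bf", "bo", "bc" to bias vectors.
    """
    z = np.concatenate([h_prev, x_t])   # [h_{t-1}, x_t]
    i = sigmoid(p["wi"] @ z + p["bi"])  # input gate,  Eq. (3)
    f = sigmoid(p["wf"] @ z + p["bf"])  # forget gate, Eq. (4)
    o = sigmoid(p["wo"] @ z + p["bo"])  # output gate, Eq. (5)
    c = f * c_prev + i * np.tanh(p["wc"] @ z + p["bc"])  # cell state, Eq. (6)
    h = o * np.tanh(c)                  # hidden state, Eq. (7)
    return h, c
```

In the full model, h_t would be projected through the Dense layer and Eq. (8) to obtain next-word probabilities.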
We fix the seed words, set the number of words to predict, and call the model with the maximum sequence length. Table 4 shows our experimental results.

Conclusion and Future Work

We have proposed a method for automatic Bengali text generation. Although no model gives perfectly accurate results, our model provides better output, and most of its output is accurate. Using the proposed model we can easily generate fixed-length, meaningful Bengali text. The approach has some limitations: it cannot generate text of arbitrary length, so the generation length must be defined in advance; the required n-gram sequence preparation is a lengthy process; and sometimes the word order of a generated sentence is incorrect. We must also supply padded token sequences to predict
the next words. In future work, we will build an automatic text generator that produces Bengali text of arbitrary length without requiring a predefined token sequence.

Table 3: Comparison of the proposed approach with a general LSTM
  Approach        Accuracy   Loss
  General LSTM    93%        0.01793
  Proposed LSTM   97%        0.0132

Table 4: Examples of Bengali text generation
  Given text: হাইশেক পাকক
  Output:     হাইশেক পাকক বির্ কাণ কাজ হাশে বিশেশে সরকার
  Given text: উজ্জ্বল অশথ কর
  Output:     উজ্জ্বল অশথ কর প্রশোজশি বর্থযা সংিাদ প্রচার কশর

Acknowledgment

We thank the DIU NLP and Machine Learning Research Lab for providing research facilities and guidance, and the Department of Computer Science and Engineering for its support in completing this research.
International Conference on Bangla Speech and Language Processing (ICBSLP), 21-22 September, 2018

Pipilika N-gram Viewer: An Efficient Large Scale N-gram Model for Bengali

Adnan Ahmad, Mahbubur Rub Talha, Md. Ruhul Amin, Farida Chowdhury
Search Engine Pipilika; Department of Computer Science & Engineering, Shahjalal University of Science & Technology, Sylhet, Bangladesh
{sust.adnan, talha13, shajib.sust, deeba.bd}@gmail.com

Abstract—In this paper, we introduce a large-scale Bengali N-gram model, trained on an online newspaper corpus, and present results and analysis of two experiments done using the model, namely a context-aware spell checker and trending topic detection. We also present the process, with emphasis on the problems that arise in working with data at this scale. One significant aspect of our N-gram model is that it contains N-gram occurrence information per day over a period of eight years, 2009-2017. This enables further applications of the model, for example trending topic detection. Our Bengali N-gram language model contains N-grams up to 5-grams, with more than 2 million unique Unigrams and over 656 million Unigram occurrences in total. We evaluate our model by calculating the perplexities of different years. We obtain an F-score of 86.6% in an experiment with the context-aware spell checker. In another experiment, we successfully detected the trending topics of a given time frame. This paper also presents the first Bengali N-gram viewer, where one can query a particular N-gram and see a graph of the frequency with which that term occurred across different time frames.

Keywords— Bengali N-gram Model, Bengali N-gram Viewer, Bengali Context-aware Spellchecker, Bengali Trending Topic Detection

I.
INTRODUCTION

For low resource languages like Bengali, where the amount of digital text or web content is not very large compared to other major languages, the data available in newspaper corpora makes them attractive for many natural language processing tasks, such as language modeling. Web-scale language models have been shown to improve the performance of many language processing tasks, such as spell checking, machine translation, automatic speech recognition and information retrieval [1] [2] [3].

In this paper, we introduce the largest N-gram model for Bengali, prepared from large-scale web content drawn from 13 different online newspapers and covering news articles from 2009 to 2017. This large amount of data was crawled, preprocessed and used as training data for the N-gram model. The model contains per-day frequencies for every N-gram up to 5-grams, which required efficient large-scale computational solutions. An indexing technique is also applied for fast and efficient retrieval of N-gram frequencies from the model. In addition, our work includes an N-gram viewer, where one can see the occurrences of an N-gram over a particular time period, and an N-gram API (https://developers.pipilika.com/ngram) for querying the model. Although the idea of an N-gram viewer is much like the Google N-gram viewer (https://books.google.com/ngrams), there are two major aspects in which ours differs. Firstly, the Google N-gram viewer is based on books, whereas ours is based on daily newspapers. Secondly, the Google N-gram viewer shows a normalized N-gram frequency on a per-year time frame, whereas our model stores frequency per day, which requires more computation. This feature gives us the opportunity to use the model for tasks like trending topic detection, which require N-gram frequencies over much smaller time frames. Later, to evaluate our model, we calculate its perplexity, which is
a standard evaluation method for language models. We also use it to perform two NLP tasks, namely context-aware spell checking and trending topic detection. We include the performance results of these tasks and show the efficiency of the model. The model can be used by researchers in many further NLP tasks, so we decided to make our N-gram model publicly available.

The rest of the paper is arranged as follows. Section II includes background studies of large-scale N-gram models for both Bengali and English. Section III contains the details of data collection and preprocessing. Section IV covers model generation and Section V model evaluation. Finally, we conclude our work in Section VI.

II. BACKGROUND

The first edition of the Google Books N-gram Corpus is introduced in [4], where it is used to quantitatively analyze a variety of topics ranging from language growth to public health. Google also released an N-gram viewer, which has become a popular tool for examining language trends. The corpus contains both words and phrases and their frequency over time. The Google Books N-gram Corpus currently contains over 8 million books, or 6% of all books ever published [5]. It has been available since 2010 and currently covers 8 languages; however, it does not contain Bengali. On the other hand, no large-scale Bengali N-gram model has been available to date.

III. DATA PREPARATION

In the fields of computational linguistics and probability, an N-gram is a contiguous sequence of n items from a given sequence of text or speech. An N-gram of size 1 is referred to as a Unigram, size 2 as a Bigram and size 3 as a Trigram. Larger sizes are sometimes referred to by the value of n, e.g., four-gram, five-gram, and so on.
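The N-gram definition above can be sketched as a small counting routine. This is an illustrative sketch with our own function names; for brevity it omits the per-day dimension that the paper's model actually stores.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-token sequences from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def count_ngrams(sentences, max_n=5):
    """Frequency table for every N-gram up to max_n (Unigram .. 5-gram),
    keyed first by n, then by the N-gram tuple."""
    counts = {n: Counter() for n in range(1, max_n + 1)}
    for sent in sentences:
        tokens = sent.split()
        for n in range(1, max_n + 1):
            counts[n].update(ngrams(tokens, n))
    return counts
```

In the paper's setting, each counter would additionally be keyed by publication date, so that per-day frequencies can be retrieved for the viewer and for trending topic detection.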
Using the statistical properties of N-grams, one can build an N-gram language model. Our primary goal is to create a model that can be queried for language model probabilities. To achieve that goal, we need to compute our model on a large amount of data; we choose online newspaper content as our primary source.

A. N-gram data collection

Bengali is a low resource language. Unlike major languages like English or Chinese, there isn't much data available for Bengali online. The main sources of data are newspapers, blogs, online portals and Wikipedia. As the goal is to create an N-gram model as well as an N-gram viewer that retrieves and shows per-day N-gram frequencies, newspaper data is ideal. We collected articles from 13 different Bengali online newspapers, spanning the years 2009 to 2017. A chart representing the number of words per news website is given in Figure 1 (Fig. 1: Number of words per news website).

Our data contains duplicate sentences, and that carries the risk of a certain amount of redundant data, for example copyright text and newspaper addresses. We do not remove duplicate sentences, as that might remove valuable sentences from the original data. We also do not include blog or Wikipedia data, as these sources lack daily frequency counts. A graph of the frequency spectrum is shown in Figure 2. We limit our graph to the first 50 spectrum elements, as a
spectrum is often characterized by very high values corresponding to the lowest frequency classes and a very long tail of frequency classes with only one member. A full spectrum plot on a non-logarithmic scale will therefore always have a rather uninformative L-shaped profile, so we plot it on a log-log scale, on which the same distribution shows itself to be linear (Figure 3). This is the characteristic signature of a power law.

In order to develop an intuition about how rapidly the vocabulary size grows, the Vocabulary Growth Curve (VGC) is given in Figure 4. A Vocabulary Growth Curve reports vocabulary size (number of types, V) as a function of sample size (number of tokens, N). Another graph shows the log-log relationship between the rank and frequency of unique words in the corpus (Figure 5), generated in order to understand how terms are distributed across documents. Zipf's law is a commonly used model of the distribution of terms in a collection, and the graph shows that the fit of the data to the law is good enough for the data to serve as a corpus for a language model.

Fig. 2. Plot of the first 50 spectrum elements with the X axis on a logarithmic scale.
Fig. 3. Log-log scale plot of the frequency distribution of the corpus.
Fig. 4. Vocabulary Growth Curve (VGC) of the N-gram data.

IV. LANGUAGE MODEL AND N-GRAM VIEWER

After collecting and preprocessing the data, we create our N-gram language model. We calculate frequency per day for all N-grams up to 5-grams. As the whole process is computationally expensive, we had to find and apply an elegant engineering solution: we use a comparatively fast approach based on multithreading and NoSQL (https://www.mongodb.com/) rather than the traditional single-threaded SQL approach. Later we index the calculated

Fig. 5.
Zipf's law between the rank and frequency of unique words in the corpus.

N-grams using the Solr indexer (lucene.apache.org/solr/) for fast retrieval. We also create a visualizer with an interactive graphical user interface (GUI, https://developers.pipilika.com/ngram) where one can query and view a graph of the frequencies of a particular N-gram across a given time frame. One can also compare the frequency distributions of multiple N-grams by simply querying comma-separated N-grams. An example of such a visualization is given in Figure 6 (Fig. 6: N-gram viewer resulting graph of frequency per query).

V. EVALUATION OF THE MODEL

There are two kinds of evaluation for language models, namely extrinsic and intrinsic. To evaluate the Bengali N-gram language model, we show both. For intrinsic evaluation, we calculate the perplexity of our model on a test set of 36556 previously unseen sentences containing 399271 words. For extrinsic evaluation, we perform two natural language processing tasks, namely context-aware spell checking and trending topic detection.

A. Perplexity

In information theory, perplexity is a measurement of how well a probability distribution or probability model predicts a sample; it is used to compare probability models. A low perplexity indicates that the probability distribution is good at predicting the sample. Perplexity is

Perplexity(C) = 2^(−(1/N) Σ_{i=1..m} log p(s_i))    (1)

where p(s_i) is the N-gram probability of the i-th sentence, N is the number of words in the test set, m is the number of sentences, and the log is base 2. We calculate perplexity for the Unigram, Bigram and Trigram models on the same
test set of 36556 sentences containing 399271 words. For the Unigram probability calculation, we multiply the factor |vocabulary| with each probability, as described in [6]. To deal with the probability of unknown words, p(unk), we used Laplace smoothing [7]. The perplexities for each year from 2009 to 2017, as well as the overall perplexity of the model, are given in Table I.

TABLE I. Perplexity of the Unigram, Bigram and Trigram models across different years.
  Years       Unigram   Bigram   Trigram
  2009-2017   1425.30   180.53   11.01
  2009        2093.63   128.49    5.30
  2010        2079.77   133.99    5.94
  2011        2069.72   134.95    5.93
  2012        1505.37   144.37    6.60
  2013        1202.32   146.34    6.67
  2014         901.23   157.25    7.80
  2015         896.60   153.40    7.72
  2016        1473.90   113.63    5.38
  2017         605.17*  173.07    8.92
[*This Unigram perplexity value is relatively low because 3 months of data are missing in our dataset for 2017.]

There are many natural language processing tasks in which an N-gram language model can be used, either to perform a task or to improve one. Here we give two examples of such tasks, namely context-aware spell checking and trending topic detection.

B. Context-aware Spell checker

N-gram models have been used to improve spell checkers. An N-gram based automatic spelling correction tool to improve information retrieval effectiveness is introduced in [8]. In another approach [9], researchers applied memory-based learning techniques to the problem of correcting spelling mistakes in text using a very large database of token N-gram occurrences, with web text as training data.

We use our N-gram language model to develop context-sensitive spell checking for Bengali. Most spell checkers only check words in isolation and determine whether they are spelled correctly. However, a large number of spelling errors involve not non-words but valid words used in invalid places in sentences.
Hence, spell checkers that only look at individual words are unable to detect many of the most common spelling mistakes. Such mistakes can only be detected by investigating word patterns, syntactic patterns and collocations, making use of frequency information, and employing statistical models. In this experiment, we use the N-gram language model described in the previous section to develop context-sensitive spell checking for Bengali. This spell checker can be used as an independent unit to scan texts, detect errors and suggest correct words.

To create a spell checker for Bengali, the first problem is finding a proper dictionary of the words used in common texts. Creating such a dictionary can take a lot of manual labour, as it requires digitization of a printed dictionary. We instead use an alternative method that creates the dictionary automatically from the unique Unigrams of our N-gram model. We first retrieve the unique Unigrams with their total counts for one year, and then remove any noise from the Unigram data.

As the Unigrams are created from newspaper content, they include misspelled words. We assume, however, that misspelled words have low frequency, so we apply a frequency threshold and keep Unigrams appearing at least 30 times in the corpus. Thus we get a list of 201749 words, which we later use as a dictionary. One advantage of such a dictionary is that it contains contemporary words as well as popular names of persons,
places and organizations that are not present in traditional dictionaries.

There are two main steps in a successful spell checker: spell checking and spell suggestion. The spell checking process is straightforward: we consider a sentence and check whether each word is contained in our dictionary. Linear checking for errors is computationally costly, so we use an efficient algorithm combining Levenshtein distance and a BK-tree. Levenshtein distance is a string metric for measuring the difference between two sequences [11]: informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. A BK-tree is a metric tree suggested in [10]; BK-trees can be used for approximate string matching against a dictionary, and the Levenshtein distance metric is commonly used when building one. Once the tree is created, we can efficiently check whether any word is in the dictionary. If the dictionary does not contain the word, it is considered misspelled, and we proceed to the second step, spell suggestion.

For any misspelled word, we retrieve word suggestions from the BK-tree within a certain edit distance. Once we have a list of suggested words, we use our N-gram model for context awareness, to narrow down the suggestion list and rank the final suggestions. For each word in the suggestion list we add the context words, form the Bigram or Trigram, and query our N-gram model for the co-occurrence frequency counts. We remove words with a count of 0, which means the context and the word never occurred together in our model. The remaining words, with frequency greater than 0, are sorted and used as the final suggestions.
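The retrieval step described above (Levenshtein distance used as the metric inside a BK-tree, then candidate lookup within an edit-distance bound) can be sketched as follows. This is an illustrative implementation, not the authors' code; the context-aware re-ranking against the N-gram model would be applied to the returned candidates afterwards.

```python
def levenshtein(a, b):
    """Minimum single-character edits (insert/delete/substitute) between a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

class BKTree:
    """Metric tree over the dictionary for approximate string matching."""
    def __init__(self, words):
        it = iter(words)
        self.root = (next(it), {})  # (word, {distance: child})
        for w in it:
            self._add(w)

    def _add(self, word):
        node, children = self.root
        while True:
            d = levenshtein(word, node)
            if d == 0:
                return  # already present
            if d not in children:
                children[d] = (word, {})
                return
            node, children = children[d]

    def search(self, word, max_dist):
        """All dictionary words within max_dist edits of `word`."""
        out, stack = [], [self.root]
        while stack:
            node, children = stack.pop()
            d = levenshtein(word, node)
            if d <= max_dist:
                out.append((d, node))
            # triangle inequality: only branches in [d-max_dist, d+max_dist] can match
            for k, child in children.items():
                if d - max_dist <= k <= d + max_dist:
                    stack.append(child)
        return out
```

An empty `search` result marks the word as misspelled with no close dictionary neighbours; otherwise the returned candidates form the suggestion list.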
For automatic spell correction, one can use the top word of the suggestion list.

We evaluate our spell checker on a dataset of 100 sentences containing a total of 923 words, among which 134 are misspelled. We prepared this dataset manually, recording the corresponding correct word for each misspelled word. We then use the dataset to test our spell checker: we try to detect the misspelled words in each sentence, predict the correct words, and provide a short list of spelling suggestions, checking whether the suggestions contain the corresponding correct word from the dataset. We calculate two separate F-score measures, one for detection of misspelled words only and another for correction; we only suggest words for correctly detected misspelled words. Table II presents the results of our context-aware spell checker.

TABLE II. Context-aware spell checker evaluation results.
                         Precision   Recall   F-score   Accuracy
  Detection              .850        .805     .827      0.953
  Detection+Correction   .836        .898     .866      0.968

C. Trending Topic Detection

We demonstrate another application of our Bengali N-gram model: trending topic detection. Our N-gram model is based on newspaper data and contains per-day frequency information for each N-gram, which makes trending topic detection possible. For simplicity, we only demonstrate Unigram and Bigram trending topic detection, but the process is the same for any topic or N-gram. To detect the trending topics, we use a statistical method
called the Chi-square test [12]. The Chi-square test is a statistical hypothesis test assessing the goodness of fit between a set of observed values and those expected theoretically. The chi-square statistic used in the test is

χ² = Σ_k (O_k − E_k)² / E_k

where O is the observed value and E is the expected value. Now consider a Unigram. In our case, the observed value for the Unigram is its average frequency in a window of 1 day for daily trending topic detection, 7 days for weekly and 30 days for monthly detection. The expected value is the overall average frequency of that particular N-gram in our model. In practice, we can limit the expected-value window to the past 7 days for daily trending topic detection, 30 days for weekly, and 60 or 90 days for monthly detection.

Following the method described above, we calculate the chi-square values for all N-grams up to 5-grams within the time frame for which we wish to detect trending topics. Since each N-gram is scored individually, we can drop the summation part of the equation, which is mathematically convenient. We sort the N-grams by their chi-square values and return the N entities with the top values. To narrow down the results, we also merge topics describing the same event by matching the strings.

As the evaluation of trending topic detection requires a list of valid trending topics, and we currently have no authoritative source for such a list, we evaluate our trending topic detector against national events such as Victory Day, Martyrs' Day and Independence Day. We assume that on these days the trending topics in the newspapers should relate to these events; if our trending topic detector successfully identifies these topics as trending, we can conclude that the process is valid.
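The per-N-gram score described above, a single (O − E)²/E term comparing the detection-window average against a longer reference-window average, can be sketched as follows. The function names and toy counts are our own, for illustration only.

```python
def chi_square_score(observed_avg, expected_avg, eps=1e-9):
    """Trending score for one N-gram: (O - E)^2 / E, where O is the average
    frequency in the detection window (1, 7 or 30 days) and E is the average
    frequency over the longer reference window."""
    return (observed_avg - expected_avg) ** 2 / (expected_avg + eps)

def trending_topics(window_counts, reference_counts, top_n=5):
    """Rank N-grams by chi-square score, highest first."""
    scores = {ng: chi_square_score(window_counts[ng],
                                   reference_counts.get(ng, 0.0))
              for ng in window_counts}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

An N-gram whose window average matches its reference average scores near zero, while a sudden burst of occurrences produces a large score and surfaces as trending.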
Table III presents the corresponding top-5 weekly trending topics of 2015. Another graph shows the frequency spectrum of the top trending topics when queried in the N-gram viewer (Fig. 7: N-gram viewer resulting graph of top trending topics). This graph clearly shows the bumps in frequency that our model could detect as statistically significant.

TABLE III. Trending topic detection (top 5) results.
  Independence day - 26 March, 2015
  Victory day - 16 December, 2015
  Martyrs day - 14 December, 2015

VI. CONCLUSION

In this paper, we present a large-scale Bengali N-gram language model based on online newspaper data and show its application to trending topic detection and context-aware spell checking. We also present an N-gram viewer where one can query a particular N-gram and see a graphical presentation of its frequency over time, and we release an API where one can query such data. We evaluate our model in both an intrinsic manner, through a perplexity test, and an extrinsic manner, through the two applications.

ACKNOWLEDGMENT

This work is partially funded by the Access to Information (a2i) programme (https://a2i.gov.bd/), run from the Prime Minister's Office of Bangladesh.

REFERENCES

[1] T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean. 2007. Large language models in machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Language Learning, pages 858-867.
[2] C.
Chelba and J. Schalkwyk. 2013. Empirical Exploration of Language Modeling for the google.com Query Stream as Applied to Mobile Voice Search, pages 197-229. Springer, New York.
[3] D. Guthrie and M. Hepple. 2010. Storing the web in memory: Space efficient language models with constant time retrieval. In Proceedings of EMNLP 2010, Los Angeles, CA.
[4] Michel, J. B., Shen, Y. K., Aiden, A. P., Veres, A., Gray, M. K., Pickett, J. P., and Pinker, S. (2010). Quantitative analysis of culture using millions of digitized books. Science, 1199644.
[5] Lin, Y., Michel, J. B., Aiden, E. L., Orwant, J., Brockman, W., and Petrov, S. (2012, July). Syntactic annotations for the Google Books ngram corpus. In Proceedings of the ACL 2012 System Demonstrations (pp. 169-174). Association for Computational Linguistics.
[6] M. Federico, N. Bertoldi, and M. Cettolo. 2008. IRSTLM: an open source toolkit for handling large scale language models. In Proceedings of Interspeech, Brisbane, Australia.
[7] C. D. Manning, P. Raghavan and M. Schütze (2008). Introduction to Information Retrieval. Cambridge University Press, p. 260.
[8] Ahmed, F., Luca, E. W. D., and Nürnberger, A. (2009). Revised N-gram based automatic spelling correction tool to improve retrieval effectiveness. Polibits, (40), 39-48.
[9] Carlson, A., and Fette, I. (2007, December). Memory-based context-sensitive spelling correction at web scale. In Machine Learning and Applications, ICMLA 2007, Sixth International Conference on (pp. 166-171). IEEE.
[10] W. Burkhard and R. Keller. Some approaches to best-match file searching. CACM, 1973.
[11] Levenshtein, Vladimir I. (February 1966). "Binary codes capable of correcting deletions, insertions, and reversals". Soviet Physics Doklady. 10 (8): 707-710.
[12] Pearson, K. (1900).
On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Philosophical Magazine Series 5, 50 (302): 157-175.
SUMono: A Representative Modern Bengali Corpus
Article · January 2014
Mohammad Abdullah Al Mumin, Mohammad Reza Selim, Muhammed Zafar Iqbal (Shahjalal University of Science and Technology)
SUST Journal of Science and Technology, Vol. 21, No. 1, 2014; P: 78-86

SUMono: A Representative Modern Bengali Corpus
(Submitted: November 11, 2013; Accepted for Publication: January 21, 2014)
Md. Abdullah Al Mumin, Abu Awal Md. Shoeb, Mohammad Reza Selim and M. Zafar Iqbal
Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Bangladesh
Email: mumin-cse@sust.edu, shoeb-cse@sust.edu, selim@sust.edu, mzi@sust.edu

Abstract: The development of Language Engineering applications requires the availability of sizable, reliable and representative corpora. However, such corpora are not routinely available for the Bengali language.
This paper introduces the Shahjalal University Monolingual (SUMono) corpus, a representative modern Bengali corpus consisting of more than 27 million words, which is the largest of its kind. The paper describes how we constructed the SUMono corpus from available online and offline Bengali texts, with articles tagged as belonging to 6 domains: Natural Science, Social Science, Computer and IT, Literature, Mass Media and Blogs. We show some characteristics of the Bengali language based upon a statistical analysis of this corpus. We also compare the 'inherent sparseness' of Bengali with English and Arabic by observing the Type-to-Token ratio of the languages. We assess our corpus in terms of its representativeness, homogeneity and vocabulary growth rate using established techniques, namely Zipf's law, the distribution of function words and Baayen's equation, respectively. We found that our corpus is balanced with respect to the frequency distribution as well as to the range of idiosyncratic phenomena.

Key Words: monolingual corpora; representative corpus; modern Bengali; Bengali corpus; Zipf's law

1. Introduction

A corpus can be defined as 'a collection of texts assumed to be representative of a given language put together so that it can be used for linguistic analysis' [1]. The importance of corpora to linguistic study is widely appreciated. A corpus is valuable to a linguist because it allows statements about language to be made in a convincing fashion. Actual uses of corpora include computational linguistics as well as studies in grammar, lexicography, language variation, historical linguistics, language acquisition and language pedagogy. It is now widely recognized that for most applications a sufficiently large corpus, reflecting the full range of domains and usage, is essential. However, for Bengali, freely available corpora that meet these requirements do not exist.
The British National Corpus (BNC), the corpus for British English, was the first corpus constructed with these requirements in mind. Later, the BNC
model has been followed in the construction of the American National Corpus, the Korean National Corpus, the Polish National Corpus, and the Russian Reference Corpus [2]. In this paper, we introduce a large-scale representative Bengali corpus, the SUMono corpus. The format and contents of the SUMono corpus follow the framework of the American National Corpus (ANC) [3]. Like the ANC, the SUMono corpus exhibits two criteria: first, it is broad, i.e., both large and well balanced; second, it is available to the entire research community. These two properties make the SUMono corpus the first of its kind for the Bengali language. The organization of the rest of the paper is as follows: section 2 reviews previous monolingual corpora for Bengali and justifies the necessity of developing another Bengali corpus. Section 3 describes the development of the SUMono corpus. Section 4 focuses on some characteristics of the Bengali language by analyzing various statistics obtained from the corpus. Section 5 performs some established experiments to assess the quality of the corpus and, finally, section 6 concludes the paper.

SUMono: A Representative Modern Bengali Corpus 79

2. Why a Bengali Corpus?

The need for large-scale representative corpora for natural language and speech is well established. There are many such corpora for English and many other European and Asian languages. However, such collections have not been constructed systematically for the Bengali language. Most researchers in NLP and IR construct their own corpora, which are usually small, special-purpose, not representative and not publicly available. The Central Institute of Indian Languages (CIIL) first introduced a Bengali corpus along with corpora of nine other Indian languages. The CIIL corpus [4] is a three-million-word corpus. Bharati et al. [5] analyzed and compared the data between Bengali and other Indian languages using the CIIL corpus.
Although it was designed to be sufficiently representative, the small size of the CIIL corpus is not sufficient for today's large-scale applications. Moreover, differences in writing style as well as in phonetic structure between Indian and Bangladeshi Bengali also show the necessity of developing our own corpus. The 'Prothom-alo' news corpus [6] was developed by collecting data from a Bangladeshi daily newspaper, the 'Prothom-Alo', for the year 2005. Although that corpus has a moderate size of more than 18 million words, it is not representative of the Bengali language. As the authors note, Prothom-Alo, being a news corpus, is biased toward a particular editing style while flexible in terms of new word-type usage. The corpus may therefore not be a good source for creating a language model. Moreover, it is not available to the research community. Islam et al. [7] propose a method for building an effective corpus which can be used only for the evaluation of Bengali text compression. Shamshed et al. [8] propose a method for building a Bengali text corpus which is designed only for information retrieval systems. In our experiments, we developed the SUMono corpus to be large in scale and sufficiently representative
for the Bengali language. Table 1 depicts a comparison in size between the SUMono corpus and the other Bengali corpora whose corpus statistics are available.

Table 1: Comparison in Size Between SUMono and Other Bengali Corpora

                                    SUMono       'Prothom-alo'   CIIL
Corpus size (in words)              27,118,025   18,100,378      3,044,573
Vocabulary size (unique words)      571,572      384,048         190,841

3. Development of the SUMono Corpus

The SUMono corpus project was initiated in 2010 with the aim of building a carefully designed corpus of 100 million words of Bangladeshi written and spoken Bengali that generally follows the framework of the ANC. However, the first release of the SUMono corpus contains only written texts of more than 27 million words. In this section, we describe various aspects of the design and construction of the corpus.

3.1 Representativeness

The major issue addressed in the design of the SUMono corpus is its representativeness. According to Biber et al. [9], "representativeness refers to the extent to which a sample includes the full range of variability in a population." In other words, representativeness can be achieved through balancing and sampling of the language or language variety presented in a corpus. The SUMono corpus contains roughly 3,691 articles covering 6 broad subject categories. In addition, the articles are written by many authors from a variety of backgrounds and contain texts of different types (e.g., quantum mechanics vs fine arts). Besides, it also contains real-life text in everyday use of Bengali, which implies that it has the sampling and representativeness property. Table 2 shows the category-wise summary of the dataset (according to the data on November 1, 2013). Lexical diversity (i.e., the token/type ratio) refers to the number of times each vocabulary item appears in the text on average.

Table 2: Summary of the SUMono Dataset

Subject Category    Articles   Total Words   % of Words   Distinct Words   Lexical Diversity
Natural Science     683        1,711,179     6.31         101,088          16.93
Social Science      1,208      8,780,323     32.38        278,466          31.53
Computer and IT     248        975,112       3.60         57,034           17.10
Literature          446        6,777,650     24.99        259,954          26.07
Mass Media          1,094      7,846,419     28.93        221,076          35.49
Blogs               12         1,027,342     3.79         79,002           13.00
The Whole Dataset   3,691      27,118,025    100          571,572          47.44

3.2 Data Sources

We have used texts from the following sources, which are either publicly available or for which permission was granted by the respective copyright holders.
• Books written in Bengali like 'Quantum Mechanics', 'Relativity Theory', 'Science and Math collections', 'Hundred interesting game of Science' and many others by Muhammed Zafar Iqbal; 'Some Questions about Function' by Dr. Rashed Talukder; the translated version of 'A Brief History of Time'; Bengali versions of NCTB books.
• Online versions of newspapers like Prothom-Alo, BDNews24.com, Bangladesh Pratidin, Daily JaiJaiDin, Daily Inqilab, Shaptahik, Shaptahik 2000.
• Websites like comjagat.com, computerbarta.com, bigganschool.org, biggani.org, at-tahreek.com, natunpata.com, golpokobita.com, kaliokalam.com, wikipedia.com/bn
• Social science articles usually written in Bengali from 'SUST Studies', a journal published by Shahjalal University of Science and Technology.
• The Bengali part of the SUPara corpus [10].

3.3 Preprocessing
Since the individual sources of the collected texts differ in many aspects, a lot of effort was required to integrate them into a common framework. The following preprocessing steps have been applied to the documents.

Cleaning: We start by cleaning up the original material collected from the different sources. Cleaning means that the various formats, for example rtf, doc and pdf, are converted to plain text files. Tagged files like html and php files are normalized by deleting tags and then converted to plain text files.

Encoding: We use simple principles for the encoding of documents in our corpus. The texts are encoded according to international standards using UTF-8 (Unicode). We have used the Nikosh converter to encode all formats into Unicode.

3.4 Availability

The corpus is available free of charge for educational and research purposes. However, the license agreement requires that any use of the statistical data must include a citation. The corpus is distributed through the Computer Science and Engineering (CSE) department of Shahjalal University of Science and Technology (SUST).

4. Statistical Properties of Bengali

Statistical inference allows linguists to generalize from properties observed in a specific sample (corpus) to the same properties in the language as a whole. Statistical inference requires that the problem at hand be operationalized in quantitative terms, typically in the form of units that can be counted in the available samples [11]. This is the case we concentrate on here. Using the 442 MB corpus, we first analyze some simple characteristics of Bengali.

Character Level Analysis

We begin by computing the relative usage of Bengali characters. Table 3 shows the percentage of occurrence of each letter in the corpus. There are about 139,689,873 characters, excluding spaces and punctuation, in the corpus, with an average of 5.15 letters per word.
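The relative-usage computation described above can be sketched in a few lines. This is a simplified illustration, not the authors' code: it uses `str.isalpha`, which skips combining vowel signs (matras), so a faithful count of those marks would additionally need Unicode category checks via `unicodedata`.

```python
from collections import Counter

def letter_percentages(text):
    """Percentage of occurrence of each letter, excluding spaces and
    punctuation, as in the character-level analysis above."""
    letters = [ch for ch in text if ch.isalpha()]
    total = len(letters)
    return {ch: 100.0 * n / total for ch, n in Counter(letters).most_common()}

# Toy input for illustration; the paper runs this over the full corpus.
print(letter_percentages("aa bb a."))  # {'a': 60.0, 'b': 40.0}
```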
In ordinary English text there are, on average, about 4.5 letters per word [12]. English words are formed from only 5 vowels and 21 consonants, whereas Bengali words are formed from 12 vowels, 20 allographs and 39 consonants, making Bengali words longer. We see from the data that the two most frequently used letters are vowel allographs, followed by a consonant that, besides its usual use in texts, also appears in cluster formation as the reph and ro-phola forms. Surprisingly, the next most frequently used letter is the hoshonto. When writing Bengali on paper we barely write the hoshonto, although we write many clusters; we are not used to seeing or thinking of the hoshonto in those clusters. In computation, however, each cluster form includes a hoshonto in its encoding, which makes its count high.

Table 3: Percentage of Occurrence of Each Letter in the Corpus

Table 4 shows the percentage of occurrence of each letter that starts a word, i.e., the word-initial letter. Most words start with a consonant, with one consonant accounting for the largest share; two particular vowels are used most often as word-initial vowels.

Table 4: Percentage of Occurrence of the Initial Letter of Words in the Corpus

Table 5 shows the frequency of the top n-grams (sequences of letters) in the corpus, for n-grams of up to 5 letters. Space characters have been converted to '◊' for legibility.

Table 5: The Top 10 Frequent n-grams in the SUMono Corpus
Word Level Analysis

Table 6 shows high-frequency n-letter words in the corpus. All words shorter than 20 letters were extracted for further calculation; only words of up to 5 letters are depicted in the table. One-letter 'words' arise when single letters are used for numbering or indexing the texts in documents. Most of the valid long words are foreign words borrowed from English scientific terms, such as hydroperoxitetranoyek and stmicroelectronics. Table 7 shows the top 50 frequent words in the SUMono corpus.

Table 6: The Top 10 Frequent n-Letter Words in the SUMono Corpus

Table 7: The Top 50 Frequent Words in the SUMono Corpus

Figure 1a shows the number of distinct n-letter words (types) recognized in the corpus. As can be seen, most word types in Bengali are 7 letters long. Figure 1b depicts the total occurrences of n-letter words (tokens). From both figures we see that, although the number of 4-letter word types is quite low, these words are used most often in Bengali.

Figure 1: Distribution of Usage of n-Letter Words in the SUMono Corpus. (a) The number of distinct n-letter words (types); (b) the number of total n-letter words (tokens)

The above statistics may have applications in different contexts. For example, post-processing in Bengali OCR, speech-to-text and spell-checker applications may employ this information to build a probabilistic model of the language for guessing words and letters in cases of ambiguity in recognition.

Inherent Sparseness

The 'inherent sparseness' of a language compared to other languages can be measured by observing the Type-to-Token Ratio (TTR) of the languages for identical text lengths in comparable genres. The TTR measures the number of 'old' words we expect to see in running text before coming across a 'new' one. The ratio is easily calculated by dividing the total number of tokens in a fragment by the number of distinct terms. From the perspective of statistical language processing, it is important to note that different languages appear to display different TTRs, or what could be called 'inherent sparseness' [13]. In order to verify the inherent sparseness of Bengali compared to English and Arabic, we picked sample sizes of 1 million words, which allows us to compare the Bengali and Arabic results with data reported for English on the Brown corpus. Table 8 shows the TTRs for fragments of different lengths from corpora of the three languages. The English Brown corpus data and the Arabic Al-Hayat corpus data are taken from Sarkar et al. [13]. The TTR for the one-million-word English Brown corpus is approximately 20.408 and for the Arabic Al-Hayat corpus of the same text length 8.252, whereas for Bengali it is 15.859. This finding invites the conclusion that Bengali textual data may be inherently sparser than English and considerably denser than Arabic.
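The token/type ratio just described is straightforward to compute. The sketch below is illustrative (the function name and the toy fragment are ours, not from the paper); a real check would sample corpus fragments of 100 up to 1,000,000 words, as in Table 8.

```python
def token_type_ratio(tokens):
    """Total tokens divided by distinct types, as used in the TTR tables.
    A lower value means more new types per token, i.e. sparser data."""
    return len(tokens) / len(set(tokens))

# Toy fragment: 10 tokens, 7 distinct types.
fragment = "the cat sat on the mat and the dog sat".split()
print(token_type_ratio(fragment))  # 10 / 7 = 1.428...
```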
This suggests that, for some statistical applications, Bengali corpora may need to be significantly larger than English ones, and significantly smaller than Arabic ones, for a similar effect.

Table 8: Type-to-Token Ratios for Corpus Fragments of Different Lengths in Three Languages

Text Length   Bengali (SUMono)   English (Brown)   Arabic (Al-Hayat)
100           1.204              1.449             1.190
1600          1.913              2.576             1.774
6400          2.455              4.702             2.357
16000         2.985              5.928             2.771
20000         3.244              6.341             2.875
1000000       15.859             20.408            8.252

5. Assessment of the Corpus

In this section, we adopt two rough but computationally cheap techniques [14] for a-priori profiling of corpus quality. First, we check for obvious imbalances by tracking term distribution patterns against Zipf's law. Second, we trace the behavior of function words to measure the homogeneity of the corpus. In addition, we study the vocabulary growth rate of the corpus.

5.1 Zipf's Distribution

Zipf's law is useful as a rough description of the frequency distribution of words in human languages [15]. Set against Zipf's law, the frequency distribution in an actual dataset is a reasonable way to gauge data sparseness, and can provide evidence of imbalance in a sample. Zipf's law draws a relationship between the frequency of a word f and its position in the frequency-ordered list, known as its rank r.
The law states that r·f = c, where r is the rank of a word, f is its frequency of occurrence, and c is a constant that depends on the text being analyzed. Word frequencies were counted for each of the six domains separately, and for the whole dataset. In all, seven lists of word frequencies were created, each sorted in descending order of frequency; ranks were assigned and frequency was plotted against rank. Table 2 in section 3 summarizes all the data used in our experiments. Figure 2 shows the resulting plots on a logarithmic scale.

Figure 2: Zipf's Curve for All Six Domains and the Whole Dataset

According to Zipf's law, for a representative sample the graphs should be straight lines with slope -1. In practice this may not be the case, because many words will have the same frequency but be assigned different ranks. As expected, the graphs improve as the size of the data increases and the proportion of rare words declines. The analysis of the graphs shows that the term distribution, in the whole dataset as well as in each subject area, fits Zipf's law comfortably. As a result, we can believe that the dataset is balanced, both overall and for each subject area.

5.2 Behavior of Function Words

Function words are words whose purpose is more to signal grammatical relationships in a sentence than to convey lexical meaning. In the context of information retrieval, function words are not very informative because they occur frequently in all documents. However, the occurrence and distribution of frequent words has some value in assessing corpus quality. In a balanced collection, the function words will tend to distribute more homogeneously than content words, whose occurrence is "bursty" [16].
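The Zipf profiling of Section 5.1 (sort words by frequency, assign ranks, and inspect the log-log slope, which the law predicts to be -1) can be sketched roughly as follows. The helper is our illustration under the assumption of whitespace-tokenized text, not the authors' tooling.

```python
from collections import Counter
import math

def zipf_slope(tokens):
    """Least-squares slope of log(frequency) vs log(rank).
    Zipf's law r*f = c predicts a slope close to -1."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    pts = [(math.log(r), math.log(f)) for r, f in enumerate(freqs, start=1)]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

# Synthetic Zipfian sample: word r appears about 1000/r times.
tokens = [f"w{r}" for r in range(1, 21) for _ in range(1000 // r)]
print(zipf_slope(tokens))  # close to -1
```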
Hence, we investigate the distribution of very frequent terms in the SUMono corpus by dividing the corpus into three chunks and observing whether the function words occur very frequently in each chunk. We assign two domains to each chunk and perform a frequency analysis for each chunk. Table 9 shows the top 10 frequent words for the three chunks of the SUMono corpus (CHUNK1: Natural Science, Mass Media; CHUNK2: Social Science, Blogs; CHUNK3: Computer and IT, Literature). We observe that most of the 10-20 most frequent words are the same for each chunk, and the same as for the whole dataset (Table 7), differing only in rank. Thus we conclude that, in this corpus, very frequent terms distribute more homogeneously than less frequent terms.

Table 9: The Distribution of Function Words in the SUMono Corpus
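The chunk-wise homogeneity check just described can be approximated in a few lines; a minimal sketch, with function names and toy chunks of our own choosing rather than the paper's data:

```python
from collections import Counter

def top_words(tokens, k=10):
    """The k most frequent words in one chunk."""
    return {w for w, _ in Counter(tokens).most_common(k)}

def shared_top_fraction(chunks, k=10):
    """Fraction of the top-k word lists that every chunk has in common.
    In a homogeneous corpus the very frequent (function) words
    should largely coincide across chunks."""
    tops = [top_words(c, k) for c in chunks]
    return len(set.intersection(*tops)) / k

# Toy chunks in which the same two function words dominate.
c1 = "the of the of the cat".split()
c2 = "the of the of the dog".split()
print(shared_top_fraction([c1, c2], k=2))  # 1.0
```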
5.3 Vocabulary Growth

The statistical models of Baayen [17] link the degree of productivity of a morphological process to the rate of vocabulary growth, i.e., to how frequently new word types formed by the process are encountered as an increasing amount of text is sampled. If the degree of productivity changes over time, there should be a corresponding change in the vocabulary growth rate [18]. Baayen shows that the growth rate of the vocabulary, the rate at which the vocabulary size increases as the sample size increases, can be estimated as G = V(1)/N, where V(1) is the number of words occurring once (hapax legomena) in a sample of size N. In the Brown corpus, G = 24375/996883 = 0.024, indicating that the vocabulary size is still growing at a relatively fast pace. The vocabulary is still growing, although at a slower pace, in much larger corpora such as the written section of the BNC (G = 0.003) [19]. Figure 3 shows the vocabulary growth curve for the SUMono corpus. The vocabulary growth rate for the SUMono corpus is G = 273617/27118025 = 0.01, indicating that the vocabulary size in the SUMono corpus is still growing at a medium pace.

Figure 3: The SUMono Corpus Vocabulary Growth Curve: Number of Types (circles) and Hapax Legomena (triangles) for 27 Increasingly Larger Token Samples (N)

6. Conclusion and Future Work

In this paper, we have presented the SUMono corpus, a large-scale collection of representative Bengali texts. The corpus, which consists of 27,118,025 Bengali words, is the largest available Bengali corpus. We have presented statistics of the SUMono corpus which help us study some properties of the Bengali language.
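Baayen's growth-rate estimate used in Section 5.3 above is simple to reproduce; a minimal sketch (function name ours) of the G = V(1)/N calculation:

```python
from collections import Counter

def growth_rate(tokens):
    """Baayen's G = V(1)/N: hapax legomena divided by sample size."""
    counts = Counter(tokens)
    hapax = sum(1 for c in counts.values() if c == 1)
    return hapax / len(tokens)

# With the SUMono figures reported above: 273617 / 27118025 ≈ 0.0101.
print(growth_rate("a a b c".split()))  # two hapaxes in four tokens -> 0.5
```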
Findings from these corpus-based studies on Bengali will help develop more Bengali-friendly and efficient word processors, OCR systems, search engines and similar widely used applications. We have compared the inherent sparseness of Bengali with English and Arabic and concluded that Bengali data is sparser than English and much denser than Arabic. In its design, the SUMono corpus is made representative by integrating a variety of text materials from different domains. We have investigated the balance of the corpus by checking the Zipf distribution over each of the sample domains as well as over the dataset as a whole. We have also investigated homogeneity by checking the distribution of function words in the corpus, and observed the vocabulary growth rate using Baayen's equation. On the whole, we can suggest that the dataset is significantly balanced, both with respect to frequency distribution and with respect to the range of idiosyncratic phenomena. In this sense, the corpus is useful as a background for the development of language technologies. In future, we plan to integrate spoken data as well as to enlarge the corpus further to achieve a corpus of 100 million words of written and spoken language. We would like to annotate the SUMono corpus on
various levels, up to a deep syntactic layer. We hope that the SUMono corpus will serve as a basic source of reference for both national and international researchers willing to do computational research on Bengali language processing.

References
[1] Tognini-Bonelli, E., 2001. Corpus Linguistics at Work. Amsterdam/Philadelphia: John Benjamins Publishing Company.
[2] McEnery, T., Xiao, R. and Tono, Y., 2006. Corpus-based Language Studies. Routledge.
[3] American National Corpus website: http://americannationalcorpus.org
[4] Dash, N. S. and Chaudhuri, B. B., 2001. Corpus based Empirical Analysis of Form, Function and Frequency of Characters used in Bangla. Special Issue of the Proceedings of the Corpus Linguistics Conference; 13:144-157.
[5] Bharati, A., Sangal, R. and Bendre, S.M., 1998. Some Observations Regarding Corpora of Some Indian Languages. In Proceedings of the International Conference on Knowledge Based Computer Systems (KBCS-98), NCST, Mumbai.
[6] Majumder, K.M.Y., Islam, M.Z. and Khan, M., 2006. Analysis of and Observations from a Bangla News Corpus. In Proceedings of the 9th International Conference on Computer and Information Technology, ICCIT 2006, pp. 520-525.
[7] Islam, M.R. and Rajon, S.A.A., 2010. Design and Analysis of an Effective Corpus for Evaluation of Bengali Text Compression Schemes. Journal of Computers, Vol. 5, No. 1.
[8] Shamshed, J. and Karim, S.M.M., 2010. Novel Bangla Text Corpus Building Method for Efficient Information Retrieval. JCIT, ISSN 2218-5224, Vol. 1, Issue 1.
[9] Biber, D., 1993. Representativeness in corpus design. Literary and Linguistic Computing 8: 243-257.
[10] Mumin, M.A.A., Shoeb, A.A.M., Selim, M.R. and Iqbal, M.Z., 2012. SUPara: A Balanced English-Bengali Parallel Corpus. SUST Journal of Science and Technology, Vol. 16, No. 2; pp. 46-51.
[11] Baroni, M. and Evert, S.
Statistical Methods for Corpus Exploitation. In Corpus Linguistics: An International Handbook, pp. 777-803.
[12] Pierce, J.R., 1980. An Introduction to Information Theory: Symbols, Signals and Noise. Dover Publications.
[13] Sarkar, A., De Roeck, A. and Garthwaite, P., 2004. Easy Measures for Evaluating non-English Corpora for Language Engineering: Some Lessons from Arabic and Bengali. Technical Report No. 2004/05, Open University, Department of Computing.
[14] Goweder, A. and De Roeck, A., 2001. Assessment of a significant Arabic corpus. In Proceedings of the Workshop on Arabic Language Processing, 39th ACL, Toulouse.
[15] Manning, C. and Schuetze, H., 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA.
[16] Katz, S., 1996. Distribution of content words and phrases in text and language modeling. Natural Language Engineering, 2(1):15-59.
[17] Baayen, R.H., 2001. Word Frequency Distributions. Dordrecht: Kluwer.
[18] Kepser, S. and Reis, M. (eds.). Linguistic Evidence: Empirical, Theoretical and Computational Perspectives, p. 357.
[19] Baroni, M. Distributions in text. In Corpus Linguistics: An International Handbook, pp. 803-822.
[20] Darrudi, E. et al., 2004. Assessment of a Modern Farsi Corpus. In Proceedings of the 2nd Workshop on Information Technology and Its Disciplines, pp. 73-77, Kish Island, Iran.
Automatic Keyword Extraction from Bengali Text using Improved RAKE Approach
2018 21st International Conference of Computer and Information Technology (ICCIT), 21-23 December, 2018. 978-1-5386-9242-4/18/$31.00 ©2018 IEEE
Mozammel Haque, Dept. of Computer Science and Engineering, Britannia University, Cumilla, Bangladesh. bappy.mozammel@gmail.com

Abstract—Keyword extraction refers to the identification of words or short phrases that concisely describe the contents of a document. Rapid Automatic Keyword Extraction (RAKE) is a well-known keyword extraction approach, but we found that RAKE fails to extract significant Bengali keywords. In this paper, we propose an improved version of the pristine RAKE called RAKEB. We also show that RAKEB works significantly better than the pristine RAKE for Bengali.

Keywords—RAKE, Keyword Extraction, Bengali, RAKEB, Natural Language

I. INTRODUCTION

Keyword extraction is the automatic identification of words or short phrases that concisely describe the contents of a text [1]. Rapid Automatic Keyword Extraction (RAKE), developed by Stuart Rose et al. [2], is one of the most popular and well-known keyword extraction models. This approach is now a widely used NLP technique for English and languages of similar structure. In 2018, S. Siddiqi and A. Sharan found some weaknesses of RAKE for the Hindi language and suggested a few modified scoring techniques [3]. The grammatical structure of Bengali is different from English as well as quite complicated [4]. We found that the pristine RAKE model fails to extract keywords from Bengali because, as a consequence of its scoring, shorter keywords unavoidably receive lower scores than longer keywords. As a result, the original algorithm needs to be amended for extracting keywords from Bengali.
In this paper, we propose an amended version of RAKE called RAKEB (Rapid Automatic Keyword Extraction for Bengali) that is well suited to keyword extraction from Bengali. We used a list of 398 Bengali stopwords in the experiments with both RAKE and RAKEB [5]. We analyze both RAKE and RAKEB and show that RAKEB is the better approach for Bengali.

The rest of the paper is organized as follows. Section II introduces the structure of the RAKE model and explains the motivation for this study. The proposed approach is presented in Section III. Section IV presents the experimental results and their analysis. Section V covers limitations and future work, and Section VI concludes the paper.

II. RAKE DESCRIPTION

Keywords are words or short phrases that concisely describe the contents of a document. RAKE is an automatic keyword extraction approach. The algorithm is as follows [6].

1. Split the text document into a list of words by breaking it at word delimiters (such as spaces and punctuation).
2. Split the obtained list of words into sequences of contiguous words by breaking each sequence at the stopwords. Each sequence is called a "candidate keyword".
3. Calculate the "score" of each individual word from the list of candidate keywords.
4. For each candidate keyword, add
the word scores of its constituent words to calculate the candidate keyword score.
5. Take the top-scoring one third of the candidates as the final list of keywords.

A. Candidate Keyword Scoring Using RAKE

A RAKE candidate keyword may contain multiple words, and the sum of the individual word scores gives the keyword score. The score of each word is its degree divided by its frequency. Suppose a word w occurs in five candidate keywords, where w1, w2, w3, w4, w5 and w6 are the other distinct words of those keywords. The candidate keywords are listed below:

i) w1 w2 w
ii) w
iii) w5 w
iv) w2 w3 w w4
v) w w6

The process of scoring a word is explained below [3], [6].

1. The frequency of w is the number of times w occurs in the document. With respect to the keywords above, Frequency(w) = 1 + 1 + 1 + 1 + 1 = 5.
2. The degree of w measures the number of words with which w co-occurs in the candidate keywords: we count the words that occur in candidate keywords containing w, including w itself. So Degree(w) = 3 + 1 + 2 + 4 + 2 = 12.
3. The score of w is Degree(w)/Frequency(w), so word_score(w) = 12/5 = 2.4.
4. For a candidate keyword of multiple words, the word scores of its constituent words are added to obtain the keyword score. For instance, score(w1 w2 w3) = word_score(w1) + word_score(w2) + word_score(w3).

B. Motivation to Modify RAKE

RAKE is a well-known NLP technique, but its effectiveness depends on factors such as the language in which the text is written. We modified RAKE with the following two issues in mind.

Firstly, a single-word keyword unavoidably scores lower in RAKE than a multi-word keyword, because a multi-word keyword's score is a simple sum of its individual word scores.
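The degree/frequency computation above can be reproduced with a short sketch (using the placeholder words w, w1…w6 from the worked example; the helper names are ours, not part of RAKE's reference implementation):

```python
from collections import defaultdict

# Candidate keywords from the worked example above (placeholder tokens).
candidates = [
    ["w1", "w2", "w"],
    ["w"],
    ["w5", "w"],
    ["w2", "w3", "w", "w4"],
    ["w", "w6"],
]

frequency = defaultdict(int)  # times each word occurs across candidates
degree = defaultdict(int)     # co-occurrence count, the word itself included

for phrase in candidates:
    for word in phrase:
        frequency[word] += 1
        degree[word] += len(phrase)  # word co-occurs with every word in the phrase

def word_score(w):
    """RAKE word score: Degree(w) / Frequency(w)."""
    return degree[w] / frequency[w]

def keyword_score(phrase):
    """RAKE candidate-keyword score: sum of member word scores."""
    return sum(word_score(w) for w in phrase)

print(frequency["w"], degree["w"], word_score("w"))  # 5 12 2.4
# Note the sum is order-insensitive: "w2 w1 w" scores the same as "w1 w2 w".
```

Running this reproduces Frequency(w) = 5, Degree(w) = 12 and word_score(w) = 2.4 from the text.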
RAKE generates the candidate keyword list from a document by breaking each word sequence at the stopwords. But a Bengali stopword does not always appear as a standalone word in a sentence, because it can be attached to another word. The following text is the top keyword extracted by RAKE from ভাষার-মনীষা [7].

“িবেদশীয় ভাষার সাহােযয্ jান িবjােনর চচর্ ার মতন সৃি ছাড়া pথা”

In this text, no stopword occurs as a standalone word, but stopwords do occur as substrings of other words. For instance, in the word “িবjােনর”, the stopword “eর” is attached to the word “িবjান”, forming the new word “িবjােনর”. As a result, RAKE cannot break the text at this stopword, and the long candidate receives a high score simply because it contains many words.

Secondly, RAKE assigns the same score to multiple candidate keywords where each of
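This failure mode is easy to demonstrate with a toy splitter (romanized placeholder tokens; the function name and the stand-in strings are our illustration, not the paper's code). Splitting only at whole-token stopwords never fires when the stopword survives only as an attached suffix:

```python
def rake_candidates(tokens, stopwords):
    """Split a token stream into candidate keywords at whole-token stopwords."""
    candidates, current = [], []
    for t in tokens:
        if t in stopwords:          # only an exact token match breaks the run
            if current:
                candidates.append(current)
            current = []
        else:
            current.append(t)
    if current:
        candidates.append(current)
    return candidates

stopwords = {"er"}  # toy romanized stand-in for the Bengali suffix stopword

# Standalone stopword: the split happens as intended.
print(rake_candidates(["bigyan", "er", "chorcha"], stopwords))
# -> [['bigyan'], ['chorcha']]

# Attached stopword ("bigyaner" = "bigyan" + "er"): no split occurs, so one
# long, artificially high-scoring candidate survives.
print(rake_candidates(["bideshiyo", "bhashar", "bigyaner", "chorcha"], stopwords))
# -> [['bideshiyo', 'bhashar', 'bigyaner', 'chorcha']]
```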
them consists of the same words but in a different order. For instance, “X Y Z”, “Y X Z” and “Z X Y” are candidate keywords found in the same text. They are not all equally important as keywords, but RAKE generates the same score for each of them [3].

Table I shows the top ten extracted keywords and their scores using RAKE on the Bengali article “ভাষার-মনীষা”. The result shows that RAKE extracts long keywords rather than important ones. To overcome this problem, we propose a keyword-length-normalized version of RAKE called RAKEB. As described above, RAKEB also overcomes the second limitation of RAKE.

TABLE I. TOP TEN EXTRACTED KEYWORDS FROM “ভাষার-মনীষা” USING RAKE

Sl. | Keyword | Score
1 | িবেদশীয় ভাষার সাহােযয্ jান িবjােনর চচর্ ার মতন সৃি ছাড়া pথা | 77.63
2 | ‘পািকsােনর রা ভাষা সমসয্া’ শীষর্ক pবেn শহীদlুাh িলেখেছন ‘বাংলােদেশর েকাটর্ | 70.48
3 | েফেলেছ পূবর্ পািকsান সািহতয্ সেmলেন সভাপিতর aিভভাষেণ মুহmদ শহীদlুাh | 63.49
4 | সময় pাপয্ তেথয্র aভােব pjােবােধর সাহােযয্ ‘iনফােরn ’ | 62.00
5 | পূবর্ পািকsােনর ভাষার আদশর্ aিভধান pকেlর সmাদক িহেসেব | 59.88
6 | েরামান হরেফর pবতর্ নেক মুহmদ শহীদlুাh aতয্n প াdগামী পদেkপ | 58.24
7 | কেয়কবার aংশ িনেয়েছন বাংলা িলিপপdিত িনেয়o েভেবেছন | 46.82
8 | বাংলা িব িবদয্ালেয়র pধান ভাষার sান aিধকার কিরেব | 46.69
9 | কােজর বণর্নার মধয্ িদেয় শহীদlুাh সmেকর্ পুেরাপুির | 44.31
10 | মুসলমান হoয়ার কারেণ ঢাকা িব িবদয্ালেয়র িশkেকর চাকিরেত | 44.25

III. RAKEB DESCRIPTION

The original RAKE approach fails to extract significant keywords from Bengali. In this paper, we modify RAKE's scoring and propose an improved version called RAKEB, designed specifically for Bengali. The modification applies only to the keyword scoring measure; all other steps remain the same in RAKEB as in RAKE.

A. Candidate Keyword Scoring Using RAKEB

RAKEB does not rank a candidate keyword by simply adding up its word scores as the pristine RAKE does.
The score of a candidate keyword is computed using (1):

KS(K) = ( Σ_{w in K} Dw / Fw ) × O / N        (1)

Here:
K = candidate keyword;
KS = score of the keyword K;
Dw = degree of each word w in K, i.e. the number of words with which w co-occurs in the candidate keywords;
Fw = frequency of each word w in K, i.e. the number of times w occurs in the document;
O = number of occurrences of K as a substring of the document;
N = number of words in K.

Table II shows the top ten extracted keywords and their scores using RAKEB on the Bengali article “ভাষার-মনীষা” [7].

TABLE II. TOP TEN EXTRACTED KEYWORDS FROM “ভাষার-মনীষা” USING RAKEB

Keyword (K) | Dw | Fw | Σ(Dw/Fw) | O | N | KS(K)
শহীদlুাh | 77 | 16 | 4.81 | 18 | 1 | 86.63
ভাষা | 10 | 5 | 2 | 26 | 1 | 52
বাঙািল | 20 | 5 | 4 | 9 | 1 | 36
মসুলমান | 12 | 3 | 4 | 8 | 1 | 32
মহুmদ শহীদlুাh | 38, 16 | - | 10.24 | 6 | 2 | 30.72
রা ভাষা | 28 | 4 | 7 | 4 | 1 | 28
দীঘর্ | 16 | 2 | 8 | 2 | 1 | 16
বাংলার | 15 | 4 | 3.75 | 4 | 1 | 15
সািহতয্ | 14 | 3 | 4.67 | 3 | 1 | 14
মানষু | 7 | 2 | 3.5 | 4 | 1 | 14

B. Potency of RAKEB

RAKE produces a high degree score for a multi-word keyword. RAKEB normalizes away this length effect and finds more frequent and significant keywords using the proposed (1). RAKEB solves both limitations of RAKE described in Section II.B.

Firstly, RAKEB does not rank a candidate keyword by simply adding up its word scores as RAKE does; instead it averages the word scores of a multi-word keyword. Furthermore, the occurrence count of the candidate keyword (O) helps RAKEB find more frequent keywords, which the pristine RAKE is unable to do. For instance, the keyword “িবjান” will receive a higher score because its occurrence count includes all of its inflected forms, such as “িবjােনর”.

Secondly, it is evident that the candidate keywords “X Y Z”, “Y X Z” and “Z X Y” are not equally important, yet RAKE ranks them equally. RAKEB solves this problem and produces a distinct score for each of these keywords, unless each of them occurs equally often in the document, because RAKEB scores a keyword using its occurrence count (O).

IV. EXPERIMENTAL RESULT

The proposed approach was implemented and tested in Visual C# on a laptop (Visual Studio 2012 [Windows Forms Application], 64-bit Windows 7 OS, 2.4 GHz CPU, 4 GB RAM). A total of four newspaper articles, collected from the Bengali newspaper “Prothom Alo”, were used for testing:

1. ভাষার-মনীষা [7]
2. িতিমরিবনাশী-সংgাহক [8]
3. aমৃেতর পুt [9]
4. বাঙািল সংsৃিতর pকৃত সাধক [10]

Both RAKE and RAKEB split the list of words obtained from the document into sequences of contiguous words by breaking each sequence at the stopwords to create candidate keywords. We used a list of 398 Bengali stopwords for this purpose [5]. Table III shows the top ten keywords from both the RAKEB and RAKE models.
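As a sanity check, the RAKEB scoring of Section III can be reproduced numerically against rows of Table II (a sketch; the function name is ours, and the per-word Dw/Fw values are transcribed from the table):

```python
def rakeb_score(dw_over_fw, o, n):
    """RAKEB keyword score per (1): the sum of Dw/Fw over the keyword's
    words, scaled by the occurrence count O and divided by the word
    count N, i.e. a length-normalized, occurrence-weighted score."""
    return sum(dw_over_fw) * o / n

# Single-word rows of Table II (per-word Dw/Fw, O, N -> KS):
print(rakeb_score([10 / 5], 26, 1))   # 52.0, matching the table
print(rakeb_score([77 / 16], 18, 1))  # ~86.63, matching the table

# Two-word row: the summed Dw/Fw of 10.24 is taken from the table,
# with O = 6 occurrences and N = 2 words.
print(rakeb_score([10.24], 6, 2))     # ~30.72, matching the table
```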
The results clearly show that RAKE extracts lengthy text as keywords because it applies no length normalization; furthermore, most of those keywords are insignificant. RAKEB, on the other hand, works significantly better for Bengali: owing to its length normalization and improved scoring, RAKEB is able to extract frequent, meaningful and significant keywords.

TABLE III. RESULT COMPARISON BETWEEN RAKE AND RAKEB (commas separate the keywords)

Article: ভাষার-মনীষা
RAKEB (proposed): শহীদlুাh, ভাষা, বাঙািল, মসুলমান, মহুmদ শহীদlুাh, রা ভাষা, দীঘর্, বাংলার, সািহতয্, মানষু
RAKE: িবেদশীয় ভাষার সাহােযয্ jান িবjােনর চচর্ ার মতন সৃি ছাড়া pথা, ‘পািকsােনর রা ভাষা সমসয্া’ শীষর্ক pবেn শহীদlুাh িলেখেছন ‘বাংলােদেশর েকাটর্ , েফেলেছ পূবর্ পািকsান সািহতয্ সেmলেন সভাপিতর aিভভাষেণ মহুmদ শহীদlুাh, সময় pাপয্ তেথয্র aভােব pjােবােধর সাহােযয্ ‘iনফােরn ’, পূবর্ পািকsােনর ভাষার আদশর্ aিভধান pকেlর সmাদক িহেসেব, েরামান হরেফর pবতর্ নেক মহুmদ শহীদlুাh aতয্n প াdগামী পদেkপ, কেয়কবার aংশ িনেয়েছন বাংলা িলিপপdিত িনেয়o েভেবেছন, বাংলা িব িবদয্ালেয়র pধান ভাষার sান aিধকার
কিরেব, কােজর বণর্নার মধয্ িদেয় শহীদlুাh সmেকর্ পুেরাপুির, মসুলমান হoয়ার কারেণ ঢাকা িব িবদয্ালেয়র িশkেকর চাকিরেত

Article: িতিমরিবনাশী-সংgাহক
RAKEB (proposed): ভাষা, আবদলু কিরম, পঁুিথর, সংsৃিত, চ gাম, বাংলা সািহতয্, মাতৃভাষা, জাতীয় ভাষা, gােমর, মাdাসা
RAKE: aসংখয্ কািহিন েকcা গীত গাথা পালার মলূয্ সািহেতয্র iিতহােসর িদক, ‘pায় 400 বছেরর সািহিতয্ক িনদশর্ন বাংলা সািহেতয্র iিতহােস sান েপেয়েছ, সংsৃিতর যথাথর্ iিতহাস রচনায় আবদলু কিরম সািহতয্িবশারেদর (1871 1953) utরািধকার, েডেক পাঠােনার আ াস িদেল ‘িkিতেমাহন গরম হেয় েগেলন aবাকo হেয়িছেলন, দkতার েণi dত eতকােলর aপাঙ্েkয় তামািদ সৃি র পুন jীবন ঘটল, ’ (সািহিতয্ক মাহববু uল আলম) gােমর দিরd গৃহsিট কীভােব বয্িkমাt, 1951 সােল চ gােম aনিু ত সংsৃিত সেmলেন মলূ সভাপিতর ভাষেণ, মধয্যুেগর েদড়শতািধক কিবেক আিব ােরর কৃিতt তাঁর’ (ড মাহববুলু হক), ‘16i মাচর্ 1951 সােলর চ gাম সংsৃিত সেmলেন pদt সািহতয্িবশারদ, আবদলু কিরম আজীবন িনরবিcnভােব হােত েলখা পুেরােনা পঁুিথ সংgহ

Article: aমেৃতর পুt
RAKEB (proposed): জািহদ ভাi, মন, কথা, বল, যায়, জািহদ ভাiেয়র, মােন, পাগল, মনুমনু আপা, বলেছন
RAKE: শতিছn কাপড়েচাপড় গালভিতর্ দািড় েগাঁফ জট পাকােনা চুল আপনমেন িবড়িবড়, েবিরেয় eকটা চাকিরেত ঢুেকেছন জািহদ ভাi েশষ বেষর্র পরীkার, জায়গায় eকটা াক রং সাiড িদেয় আসিছল dতেবেগ, ছাড়া eকটু কান পাতেলi েশানা যায় পাগলরা েকবল বতর্ মান, পেড় িনেয়িছেলন কেয়কিট লাiন ‘জীিবেতর েশাক মতৃরা gহণ, বািড়oয়ালার ভাড়া বািক পড়েছ জািহদ ভাi বাসায় তালা েমের, বড়জন মােন শােহদ ভাi পড়েতন pেকৗশল িব িবদয্ালেয় sাপতয্িবদয্ায়, ডাsিবেনর পােশর ফুটপােত দাঁিড়েয় বkৃতার ভি েত িচৎকার, িনেষধাjার সময় সভেয় সের দাঁড়ােনার সময় কথা বলার সময়, েবড়ােত লাগেলন—eকাi pয্াকাডর্ িলেখ দাঁড়ােত লাগেলন েpসkােবর

Article: বাঙািল সংsৃিতর pকৃত সাধক
RAKEB (proposed): সংsৃত, বাংলার, িহn,ু বাংলা সািহেতয্র, পূবর্ব , বাংলা ভাষা, aনরুােগর, kিমlা, ভাষার
RAKE: pকাের pেবশ লাভ কিরল bাhণগণ iহােক িক প ঘৃণার চেk েদিখেতন, েগাঁড়া িহn ুসমােজর uৎপীড়েন iহারা sতঃpবৃt হiয়া isােমর আ য় gহণ, সািহতয্ (1896) রচনাকােল দীেনশচnd েসনিছেলন kিমlা িভেkািরয়া sুেলর pধান িশkক, pিতিনয়ত সংgামমখুর েতমিন সহজ সরল জিটলতামkু uদার েচতনায় sc, pাচীন পঁুিথ আিব ােরর কিঠন েম bতী হoয়ার েpরণা েজাগায়, uিনশ শতেকর েশষােধর্ বাঙািলর জাতীয় মানেস uপিনেবশবাদী িচnার িবপরীেত, িনmবগর্ীয় বা াল জীবেনর সমdৃ সংsৃিত বাঙািল সংsৃিত িনেয়i গবর্ aনভুব, pাচীন বা লা সািহেতয্ মসুলমােনর aবদান (1940) শীষর্ক gn, বাংলার পূবর্ a েল aিধক হাের িনmবগর্ীয় aনাযর্ জনেগা ীর বসবাসসূেt, েপেরেছ বাংলা সািহেতয্র সবেচেয় ধমর্াcnতামkু মানিবক েpেমর আখয্ানমলূক গীিতকাসমহূ

Table IV, Table V and Table VI show the scores of the top extracted keywords obtained with RAKEB from the Bengali texts.

TABLE IV. EXTRACTED KEYWORDS FROM িতিমরিবনাশী-সংgাহক

Keyword (K) | Σ(Dw/Fw) | O | N | KS(K)
ভাষা | 4.2 | 22 | 1 | 92.4
আবদলু কিরম | 16.78 | 9 | 2 | 75.5
পঁুিথর | 7.17 | 6 | 1 | 43
সংsৃিত | 6.33 | 6 | 1 | 38
চ gাম | 5.67 | 5 | 1 | 28.33
বাংলা সািহতয্ | 14.25 | 3 | 2 | 21.38
মাতৃভাষা | 3 | 5 | 1 | 15
জাতীয় ভাষা | 7 | 4 | 2 | 14
gােমর | 4.67 | 3 | 1 | 14
মাdাসা | 6.5 | 2 | 1 | 13

TABLE V. EXTRACTED KEYWORDS FROM aমেৃতর পুt

Keyword (K) | Σ(Dw/Fw) | O | N | KS(K)
জািহদ ভাi | 8.44 | 36 | 2 | 151.94
মন | 2.2 | 41 | 1 | 90.2
কথা | 3.1 | 23 | 1 | 71.19
বল | 1 | 62 | 1 | 62
যায় | 3.5 | 16 | 1 | 56
জািহদ ভাiেয়র | 7.88 | 12 | 2 | 47.29
মােন | 3.9 | 11 | 1 | 42.9
পাগল | 3 | 14 | 1 | 42
মনুমনু আপা | 10.57 | 7 | 2 | 37
বলেছন | 3.25 | 9 | 1 | 29.25

TABLE VI. EXTRACTED KEYWORDS FROM বাঙািল সংsৃিতর pকৃত সাধক

Keyword (K) | Σ(Dw/Fw) | O | N | KS(K)
সংsৃত | 4.5 | 11 | 1 | 49.5
বাংলার | 5.13 | 8 | 1 | 41
িহn ু | 5.67 | 4 | 1 | 22.67
বাংলা সািহেতয্র | 10.45 | 4 | 2 | 20.9
পূবর্ব | 3.33 | 6 | 1 | 20
বাংলা ভাষা | 8.7 | 4 | 2 | 17.4
aনরুােগর | 7.5 | 2 | 1 | 15
kিমlা | 5.5 | 2 | 1 | 11
ভাষার | 2.33 | 4 | 1 | 9.33

V. LIMITATIONS AND FUTURE WORK

Our proposed RAKEB is designed to extract keywords from Bengali text only; we do not consider this very problematic. The improved approach works better on large documents than on very small ones, although even on small documents its accuracy is much better than that of the original RAKE. In (1), O counts occurrences of the keyword as a substring of the document. Because of this, a few non-significant keywords can score higher than significant ones, but the effect is negligible, since this frequency helps bring informative and significant keywords into the top list.

RAKEB is a modification of RAKE's keyword scoring measure designed specifically for Bengali; the original RAKE does not work well for Bengali. We plan to modify the approach further so that a single RAKE model can extract keywords from a large number of languages.

VI. CONCLUSION

RAKE is an automatic keyword extraction approach. The original RAKE algorithm fails to extract keywords from Bengali, as the structure of Bengali is complicated and very different from English. In this paper, we have proposed a modified version of RAKE called RAKEB that is designed specifically for Bengali. We used four Bengali articles for our experiments and reported results for both RAKE and RAKEB, showing that RAKEB works significantly better for Bengali than the original RAKE. We hope that RAKEB will prove useful in various fields of computational linguistics.

REFERENCES

[1] S. Beliga, M. Ana, and S. Martinčić-Ipšić, “An Overview of Graph-Based Keyword Extraction Methods and Approaches,” J. Inf. Organ. Sci., vol. 39, no. 1, pp. 1–20, 2015.
[2] S. Rose, D. Engel, and N.
Cramer, “Automatic Keyword Extraction from Individual Documents,” in Text Mining: Applications and Theory, 2010, pp. 1–20.
[3] S. Siddiqi and A. Sharan, “Improved RAKE Models to Extract Keywords from Hindi Documents,” Inf. Syst. Des. Intell. Appl., Adv. Intell. Syst. Comput., pp. 472–483, 2018.
[4] M. Haque and M. N. Huda, “Relation between Subject and Verb in Bangla Language: A Semantic Analysis,” Int. Conf. Informatics, Electron. Vis., pp. 41–44, 2016.
[5] “Bengali stopwords collection.” [Online]. Available: https://github.com/stopwords-iso/stopwords-bn/blob/master/stopwords-bn.txt. [Accessed: 22-Jun-2018].
[6] “Keyword Extraction using RAKE.” [Online]. Available: https://codelingo.wordpress.com/2017/05/26/keyword-extraction-using-rake/. [Accessed: 21-Jul-2018].
[7] “ভাষার মনীষা” [Online]. Available: http://www.prothomalo.com/special-supplement/article/1470321/ভাষার-মনীষা. [Accessed: 24-Jun-2018].
[8] “িতিমরিবনাশী সংgাহক” [Online]. Available: http://www.prothomalo.com/special-supplement/article/1470331/িতিমরিবনাশী-সংgাহক. [Accessed: 24-Jun-2018].
[9] “aমেৃতর পুt” [Online]. Available: http://www.prothomalo.com/special-supplement/article/1470296/aমেৃতর-পুt. [Accessed: 24-Jun-2018].
[10] “বাঙািল সংsৃিতর pকৃত সাধক” [Online]. Available: http://www.prothomalo.com/special-supplement/article/1470301/বাঙািল-সংsৃিতর-pকৃত-সাধক. [Accessed: 24-Jun-2018].
/WP-CyrillicA /WP-CyrillicB /WP-GreekCentury /WP-GreekCourier /WP-GreekHelve /WP-HebrewDavid /WP-IconicSymbolsA /WP-IconicSymbolsB /WP-Japanese /WP-MathA /WP-MathB /WP-MathExtendedA /WP-MathExtendedB /WP-MultinationalAHelve /WP-MultinationalARoman /WP-MultinationalBCourier /WP-MultinationalBHelve /WP-MultinationalBRoman /WP-MultinationalCourier /WP-Phonetic /WPTypographicSymbols /XYATIP10 /XYBSQL10 /XYBTIP10 /XYCIRC10 /XYCMAT10 /XYCMBT10 /XYDASH10 /XYEUAT10 /XYEUBT10 /ZapfChancery-MediumItalic /ZapfDingbats /ZapfHumanist601BT-Bold /ZapfHumanist601BT-BoldItalic /ZapfHumanist601BT-Demi /ZapfHumanist601BT-DemiItalic /ZapfHumanist601BT-Italic /ZapfHumanist601BT-Roman /ZWAdobeF /NeverEmbed [ true /AntiAliasColorImages false /CropColorImages true /ColorImageMinResolution 150 /ColorImageMinResolutionPolicy /OK /DownsampleColorImages true /ColorImageDownsampleType /Bicubic /ColorImageResolution 300 /ColorImageDepth -1 /ColorImageMinDownsampleDepth 1 /ColorImageDownsampleThreshold 2.00333 /EncodeColorImages true /ColorImageFilter /DCTEncode /AutoFilterColorImages true /ColorImageAutoFilterStrategy /JPEG /ColorACSImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /ColorImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /JPEG2000ColorACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /JPEG2000ColorImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /AntiAliasGrayImages false /CropGrayImages true /GrayImageMinResolution 150 /GrayImageMinResolutionPolicy /OK /DownsampleGrayImages true /GrayImageDownsampleType /Bicubic /GrayImageResolution 300 /GrayImageDepth -1 /GrayImageMinDownsampleDepth 2 /GrayImageDownsampleThreshold 2.00333 /EncodeGrayImages true /GrayImageFilter /DCTEncode /AutoFilterGrayImages true /GrayImageAutoFilterStrategy /JPEG /GrayACSImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /GrayImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /JPEG2000GrayACSImageDict << /TileWidth 256 
/TileHeight 256 /Quality 15 /JPEG2000GrayImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /AntiAliasMonoImages false /CropMonoImages true /MonoImageMinResolution 1200 /MonoImageMinResolutionPolicy /OK /DownsampleMonoImages true /MonoImageDownsampleType /Bicubic /MonoImageResolution 600 /MonoImageDepth -1 /MonoImageDownsampleThreshold 1.00167 /EncodeMonoImages true /MonoImageFilter /CCITTFaxEncode /MonoImageDict << /K -1 /AllowPSXObjects false /CheckCompliance [ /None /PDFX1aCheck false /PDFX3Check false /PDFXCompliantPDFOnly false /PDFXNoTrimBoxError true /PDFXTrimBoxToMediaBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXSetBleedBoxToMediaBox true /PDFXBleedBoxToTrimBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXOutputIntentProfile (None) /PDFXOutputConditionIdentifier () /PDFXOutputCondition () /PDFXRegistryName () /PDFXTrapped /False /CreateJDFFile false /Description << /ARA <FEFF06270633062A062E062F0645002006470630064700200627064406250639062F0627062F0627062A002006440625064606340627062100200648062B062706260642002000410064006F00620065002000500044004600200645062A064806270641064206290020064506390020064506420627064A064A0633002006390631063600200648063706280627063906290020062706440648062B0627062606420020062706440645062A062F062706480644062900200641064A00200645062C062706440627062A002006270644062306390645062706440020062706440645062E062A064406410629061B0020064A06450643064600200641062A062D00200648062B0627062606420020005000440046002006270644064506460634062306290020062806270633062A062E062F062706450020004100630072006F0062006100740020064800410064006F006200650020005200650061006400650072002006250635062F0627063100200035002E0030002006480627064406250635062F062706310627062A0020062706440623062D062F062B002E> /CHS 
<FEFF4f7f75288fd94e9b8bbe5b9a521b5efa7684002000410064006f006200650020005000440046002065876863900275284e8e55464e1a65876863768467e5770b548c62535370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c676562535f00521b5efa768400200050004400460020658768633002> /CHT <FEFF4f7f752890194e9b8a2d7f6e5efa7acb7684002000410064006f006200650020005000440046002065874ef69069752865bc666e901a554652d965874ef6768467e5770b548c52175370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c4f86958b555f5df25efa7acb76840020005000440046002065874ef63002> /CZE <FEFF005400610074006f0020006e006100730074006100760065006e00ed00200070006f0075017e0069006a007400650020006b0020007600790074007600e101590065006e00ed00200064006f006b0075006d0065006e0074016f002000410064006f006200650020005000440046002000760068006f0064006e00fd00630068002000700072006f002000730070006f006c00650068006c0069007600e90020007a006f006200720061007a006f007600e1006e00ed002000610020007400690073006b0020006f006200630068006f0064006e00ed0063006800200064006f006b0075006d0065006e0074016f002e002000200056007900740076006f01590065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f007400650076015900ed007400200076002000700072006f006700720061006d0065006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076011b006a016100ed00630068002e> /DAN 
<FEFF004200720075006700200069006e0064007300740069006c006c0069006e006700650072006e0065002000740069006c0020006100740020006f007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000650067006e006500720020007300690067002000740069006c00200064006500740061006c006a006500720065007400200073006b00e60072006d007600690073006e0069006e00670020006f00670020007500640073006b007200690076006e0069006e006700200061006600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020004400650020006f007000720065007400740065006400650020005000440046002d0064006f006b0075006d0065006e0074006500720020006b0061006e002000e50062006e00650073002000690020004100630072006f00620061007400200065006c006c006500720020004100630072006f006200610074002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e></s>
<s>/DEU <FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200075006d002000650069006e00650020007a0075007600650072006c00e40073007300690067006500200041006e007a006500690067006500200075006e00640020004100750073006700610062006500200076006f006e00200047006500730063006800e40066007400730064006f006b0075006d0065006e00740065006e0020007a0075002000650072007a00690065006c0065006e002e00200044006900650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000520065006100640065007200200035002e003000200075006e00640020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP <FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f0073002000640065002000410064006f00620065002000500044004600200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e0020006500200069006d0070007200650073006900f3006e00200064006500200063006f006e006600690061006e007a006100200064006500200064006f00630075006d0065006e0074006f007300200063006f006d00650072006300690061006c00650073002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /FRA 
<FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE <FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003b103be03b903cc03c003b903c303c403b7002003c003c103bf03b203bf03bb03ae002003ba03b103b9002003b503ba03c403cd03c003c903c303b7002003b503c003b903c703b503b903c103b703bc03b103c403b903ba03ce03bd002003b503b303b303c103ac03c603c903bd002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b
903c2002e> /HEB <FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005E205D105D505E8002005D405E605D205D4002005D505D405D305E405E105D4002005D005DE05D905E005D4002005E905DC002005DE05E105DE05DB05D905DD002005E205E105E705D905D905DD002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E05D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV (Za stvaranje Adobe PDF dokumenata pogodnih za pouzdani prikaz i ispis poslovnih dokumenata koristite ove postavke. Stvoreni PDF dokumenti mogu se otvoriti Acrobat i Adobe Reader 5.0 i kasnijim verzijama.) 
/HUN <FEFF00410020006800690076006100740061006c006f007300200064006f006b0075006d0065006e00740075006d006f006b0020006d00650067006200ed007a00680061007400f30020006d0065006700740065006b0069006e007400e9007300e900720065002000e900730020006e0079006f006d00740061007400e1007300e10072006100200073007a00e1006e0074002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c00200068006f007a006800610074006a00610020006c00e9007400720065002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA (Utilizzare queste impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) 
/JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f0020006e00690065007a00610077006f0064006e00650067006f002000770079015b0077006900650074006c0061006e00690061002000690020006400720075006b006f00770061006e0069006100200064006f006b0075006d0065006e007400f300770020006600690072006d006f0077007900630068002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB 
<FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM <FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e007400720075002000760069007a00750061006c0069007a00610072006500610020015f006900200074006900700103007200690072006500610020006c0061002000630061006c006900740061007400650020007300750070006500720069006f0061007201030020006100200064006f00630075006d0065006e00740065006c006f007200200064006500200061006600610063006500720069002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS 
<FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043f043e04340445043e0434044f04490438044500200434043b044f0020043d0430043404350436043d043e0433043e0020043f0440043e0441043c043e044204400430002004380020043f04350447043004420438002004340435043b043e0432044b044500200434043e043a0443043c0435043d0442043e0432002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SLV <FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020007000720069006d00650072006e006900680020007a00610020007a0061006e00650073006c006a00690076006f0020006f0067006c00650064006f00760061006e006a006500200069006e0020007400690073006b0061006e006a006500200070006f0073006c006f0076006e0069006800200064006f006b0075006d0065006e0074006f0076002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR 
<FEFF005400690063006100720069002000620065006c00670065006c006500720069006e0020006700fc00760065006e0069006c0069007200200062006900720020015f0065006b0069006c006400650020006700f6007200fc006e007400fc006c0065006e006d006500730069002000760065002000790061007a0064013100720131006c006d006100730131006e006100200075007900670075006e002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f006200610074002000760065002000410064006f00620065002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /ENU (Use these settings to create Adobe PDF documents suitable for reliable viewing and printing of business documents. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
Proceedings of the International Conference on Engineering Research, Innovation and Education 2017, ICERIE 2017, 13-15 January, SUST, Sylhet, Bangladesh

Bangla Word Clustering Based on Tri-gram, 4-gram and 5-gram Language Model

Dipaloke Saha*, Md. Saddam Hossain, Md. Saiful Islam and Sabir Ismail
Department of Computer Science & Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh.
dipsustcse12@gmail.com, mshossaincse@gmail.com, saiful-cse@sust.edu, sabir.ismail01@gmail.com

Keywords: Word Cluster, Natural Language Processing, Machine Learning, N-gram Model, Term Frequency (tf).

Abstract: In this paper, we describe a research method that generates Bangla word clusters on the basis of semantic and contextual similarity. Word clustering matters for parts of speech (POS) tagging, word sense disambiguation, text classification, recommender systems, spell checking, grammar checking, knowledge discovery and many other Natural Language Processing (NLP) applications. Efficient word clustering methods have already been implemented for English and some other languages, but due to a lack of resources, word clustering for Bangla is still at an early stage. Research on word clustering in English based on the five preceding and five following words of a key word has produced efficient results. Here we implement the tri-gram, 4-gram and 5-gram models of word clustering for Bangla to observe which of them is best. We start our research with a fairly large corpus of approximately one lakh (100,000) Bangla words, use a machine learning technique, generate word clusters, and analyze the clusters by testing several different threshold values.

1. INTRODUCTION

Though Bangla is a widely spoken language, it lacks resources in its research field. Recently a new research direction, word clustering, has been added for Bangla, and this paper attempts to extend it. For this purpose, a large Bangla corpus containing 97,971 individual words was compiled to generate the word clusters. We propose an unsupervised machine learning technique and a method to cluster Bangla words on the basis of similarity in semantics and context. Word clusters have a wide range of applications in language processing. POS tagging is one of them: words in the same cluster usually carry the same POS tag. Word clustering can also produce suggestions for an inaccurately typed word, which is very helpful for a spell checker. Word sense disambiguation and grammatical mistakes in sentence structure can likewise be addressed using clustered words. In a recommender system, if related products of the same category are clustered in the same group, more feasible suggestions can be produced. Such work is also useful for a Bangla search engine to find the appropriate content. So word clustering is of great importance in the field of natural language processing.

2. RELATED WORK

In Bangla, the implementation of word clustering is at a nascent stage. A previous
work on Bangla word clustering exists, in which Sabir Ismail and M. Shahidur Rahman used an unsupervised machine learning technique to implement a bigram model. In other languages, several techniques have been used for word clustering. Finch and Chater (1992) implemented a bigram model to calculate the weight matrix of a neural network. An N-gram language model was used for word clustering in the research of Brown, deSouza, Mercer, Della Pietra and Lai (1992). Another n-gram effort was introduced by Korkmaz (1997), in which a similarity function and a greedy algorithm group the words into clusters. Using the deleted interpolation method, Mori, Nishimura and Itoh (1998) obtained better results than Brown et al.'s method; this was done for Japanese and English. Besides these, there is quite a good number of word clustering studies for other languages such as Russian, Arabic and Chinese.

3. PROBLEM DEFINITION

Clustering is an unsupervised machine learning technique that does not require any rules or predefined conditions. Items that are similar, either semantically or contextually, are grouped in the same cluster, while dissimilar items go into different clusters. The method introduced here concentrates on two kinds of similarity: semantic and contextual. Consider four example sentences in which one pair of words is semantically similar (sentences 1 and 2) and another pair is contextually similar (sentences 3 and 4). The theory of the N-gram model is applied here: a probability distribution defines the n-th item of a sequence from the previous or next (n-1) items. The tri-gram, 4-gram and 5-gram models correspond to N-gram sizes of 3, 4 and 5 respectively. In this research, word clusters are generated with the tri-gram, 4-gram and 5-gram models, and the most efficient model is then identified from the resulting clusters.

4. METHODOLOGY

Firstly, a fairly large corpus of 97,971 individual words Wi is used in this research. Next, for each specific word, a list of its preceding three words is prepared for the tri-gram model, of its preceding four words for the 4-gram model, and of its preceding five words for the 5-gram model; similarly, lists of the following three, four and five words are prepared. The similarity between a pair of words to be included in the same cluster based on the preceding three, four and five words is determined as follows. In the tri-gram model, for every pair of words Wi, Wj, with match() giving the words common to the two lists:

P(Wi, Wj) = Count(match(list(Wi-3, Wi-2, Wi-1), list(Wj-3, Wj-2, Wj-1))) / (Count(list(Wi-3, Wi-2, Wi-1)) + Count(list(Wj-3, Wj-2, Wj-1)))

Similarly, the calculation for the 4-gram model is:

P(Wi, Wj) = Count(match(list(Wi-4, Wi-3, Wi-2, Wi-1), list(Wj-4, Wj-3, Wj-2, Wj-1))) / (Count(list(Wi-4, Wi-3, Wi-2, Wi-1)) + Count(list(Wj-4, Wj-3, Wj-2, Wj-1)))

and for the 5-gram model:

P(Wi, Wj) = Count(match(list(Wi-5, Wi-4, Wi-3, Wi-2, Wi-1), list(Wj-5, Wj-4, Wj-3, Wj-2, Wj-1))) / (Count(list(Wi-5, Wi-4, Wi-3, Wi-2, Wi-1)) + Count(list(Wj-5, Wj-4, Wj-3, Wj-2, Wj-1)))

Again, similarly, the similarities between a pair of words to be included in the same cluster based on the following three, four and five words
are determined as follows. For the tri-gram model:

P(Wi, Wj) = Count(match(list(Wi+3, Wi+2, Wi+1), list(Wj+3, Wj+2, Wj+1))) / (Count(list(Wi+3, Wi+2, Wi+1)) + Count(list(Wj+3, Wj+2, Wj+1)))

Similarly, the calculation for the 4-gram model is:

P(Wi, Wj) = Count(match(list(Wi+4, Wi+3, Wi+2, Wi+1), list(Wj+4, Wj+3, Wj+2, Wj+1))) / (Count(list(Wi+4, Wi+3, Wi+2, Wi+1)) + Count(list(Wj+4, Wj+3, Wj+2, Wj+1)))

and for the 5-gram model:

P(Wi, Wj) = Count(match(list(Wi+5, Wi+4, Wi+3, Wi+2, Wi+1), list(Wj+5, Wj+4, Wj+3, Wj+2, Wj+1))) / (Count(list(Wi+5, Wi+4, Wi+3, Wi+2, Wi+1)) + Count(list(Wj+5, Wj+4, Wj+3, Wj+2, Wj+1)))

If, for a particular model, the above equations yield values greater than a predefined threshold, the two words are grouped into the same cluster for that model. For example, to implement the tri-gram model for a pair of words Wi and Wj:

Preceding three-word list for Wi: list(Wi-3, Wi-2, Wi-1), so Count(list(Wi-3, Wi-2, Wi-1)) = 3
Following three-word list for Wi: list(Wi+3, Wi+2, Wi+1), so Count(list(Wi+3, Wi+2, Wi+1)) = 3
Preceding three-word list for Wj: list(Wj-3, Wj-2, Wj-1), so Count(list(Wj-3, Wj-2, Wj-1)) = 3
Following three-word list for Wj: list(Wj+3, Wj+2, Wj+1), so Count(list(Wj+3, Wj+2, Wj+1)) = 3

Suppose the number of matched words based on the preceding three words is Count(match(list(Wi-3, Wi-2, Wi-1), list(Wj-3, Wj-2, Wj-1))) = 2. Since Count(list(Wi-3, Wi-2, Wi-1)) + Count(list(Wj-3, Wj-2, Wj-1)) = 6, the similarity based on the preceding three words is P(Wi, Wj) = 2/6 = 0.33. Likewise, if the number of matched words based on the following three words is Count(match(list(Wi+3, Wi+2, Wi+1), list(Wj+3, Wj+2, Wj+1))) = 2 over a total of 6, the similarity based on the following three words is also P(Wi, Wj) = 2/6 = 0.33. The 4-gram and 5-gram models are implemented in the same way.
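The scoring rule above can be sketched in a few lines of Python. This is only an illustrative sketch, not the authors' implementation: the function names are invented, match() is read here as a multiset intersection of the two context lists, and each word is compared through the context of a single occurrence, all of which are assumptions.

```python
from collections import Counter

def match_count(ctx_a, ctx_b):
    # Size of the multiset intersection of the two context-word lists;
    # this is the assumed reading of the paper's match(list_i, list_j).
    return sum((Counter(ctx_a) & Counter(ctx_b)).values())

def similarity(ctx_a, ctx_b):
    # P(Wi, Wj) = matched words / (Count(list_i) + Count(list_j));
    # with equal-length lists the maximum possible score is 0.5.
    denom = len(ctx_a) + len(ctx_b)
    return match_count(ctx_a, ctx_b) / denom if denom else 0.0

def contexts(tokens, i, n):
    # Preceding-n and following-n word lists for the token at position i
    # (n = 3 for the tri-gram model, 4 and 5 for the larger models).
    return tokens[max(0, i - n):i], tokens[i + 1:i + 1 + n]

def same_cluster(tokens, i, j, n=3, threshold=0.20):
    # Group two word occurrences when BOTH the preceding-context and the
    # following-context scores exceed the threshold.
    prev_i, next_i = contexts(tokens, i, n)
    prev_j, next_j = contexts(tokens, j, n)
    return (similarity(prev_i, prev_j) > threshold
            and similarity(next_i, next_j) > threshold)
```

Re-running the worked example: two matched words over context lists of length three give 2/6 = 0.33 on each side, which clears a 0.20 threshold, so the pair would be placed in the same cluster.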
Different threshold values were experimented with, and the best result was obtained with 0.20. Two words are grouped into the same cluster when all of the probability scores are greater than this threshold value.

5. RESULT ANALYSIS
In the tri-gram, 4-gram and 5-gram models we derive 2215, 3327 and 5730 word clusters in total, respectively. Some clusters chosen randomly from each model are presented in the following tables: Table 1 (word clusters for the tri-gram model), Table 2 (word clusters for the 4-gram model), and Table 3 (word clusters for the 5-gram model). After analyzing the word clusters of all three models, we find poor similarity in some clusters: 266 for the tri-gram, 300 for the 4-gram and 360 for the 5-gram model. This leaves 1949, 3027 and 5370 strong-similarity clusters for the tri-gram, 4-gram and 5-gram models respectively, so the accuracy for strong similarity is: tri-gram 88%, 4-gram 91%, 5-gram 93%. It is therefore observed that the 4-gram model is better than the tri-gram model, and the 5-gram model is the best of the three.

6. CONCLUSION
Word clustering is important for various purposes in any language. For this reason, tri-gram, 4-gram and 5-gram models were implemented here for Bangla, continuing the previous work on word clustering. The analysis and results presented above on quite a large Bangla corpus have helped us to find the</s>
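The thresholding step can be illustrated with a short sketch. The paper does not detail the exact grouping procedure, so transitive grouping via union-find is an assumption here, as are all names; only the 0.20 threshold and the "all scores must exceed it" rule come from the text.

```python
THRESHOLD = 0.20  # best-performing value reported in the text

def cluster_words(words, similarities):
    """Group words whose pairwise scores all exceed THRESHOLD.

    similarities: dict mapping a (wi, wj) tuple to the list of scores for
    that pair (e.g. preceding- and following-context similarities).
    Grouping is transitive (union-find), which is one plausible reading.
    """
    parent = {w: w for w in words}

    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]  # path halving
            w = parent[w]
        return w

    for (a, b), scores in similarities.items():
        if all(s > THRESHOLD for s in scores):
            parent[find(a)] = find(b)

    clusters = {}
    for w in words:
        clusters.setdefault(find(w), []).append(w)
    return list(clusters.values())

pairs = {("x", "y"): [0.33, 0.33], ("y", "z"): [0.10, 0.50]}
print(cluster_words(["x", "y", "z"], pairs))  # [['x', 'y'], ['z']]
```

Here ("x", "y") pass on both scores and share a cluster, while ("y", "z") fail on one score and stay apart.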
<s>efficiency among the three mentioned models for word clustering. On the basis of this observation, it can be said that the higher orders of the N-gram model are more efficient than the preceding orders.</s>
<s>2019 22nd International Conference on Computer and Information Technology (ICCIT), 18-20 December, 2019

Authorship Attribution in Bangla Literature using Character-level CNN

Aisha Khatun (aysha.kamal7@gmail.com), Anisur Rahman (emailforanis@gmail.com), Md. Saiful Islam (saif.acm@gmail.com), Marium-E-Jannat (jannat-cse@sust.edu), Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh

Abstract—Characters are the smallest unit of text that can extract stylometric signals to determine the author of a text. In this paper, we investigate the effectiveness of character-level signals in authorship attribution of Bangla literature and show that the results are promising but improvable. The time and memory efficiency of the proposed model is much higher than that of its word-level counterparts, but accuracy is 2-5% less than the best-performing word-level models. A comparison of various word-based models is performed, showing that the proposed model performs increasingly better with larger datasets. We also analyze the effect of pre-training character embeddings of the diverse Bangla character set on authorship attribution. It is seen that performance is improved by up to 10% with pre-training. We used 2 datasets of 6 to 14 authors, balancing them before training, and compare the results.

Keywords—Character Level, Character Embedding, Bangla, Authorship Attribution, Deep Learning

I. INTRODUCTION
Authorship attribution is generally concerned with the identification of the original author of a given text from a set of given authors. It has a wide range of applications including plagiarism detection, forensic linguistics, etc.
Each author has a distinctive writing style that is exploited by statistical analysis to detect the author. However, in the Bangla language, the amount of work done in this area is not very rich despite it being one of the most spoken languages. In traditional methods, texts are represented using independent features such as lexical n-grams or frequency-based representations. In this approach, words of similar context are likely to be represented in different vector spaces because the features are independent, so the semantic values of the words might be lost, which is problematic. Word embedding, also generally known as distributed term representation, offers a solution to this problem by encoding semantic similarity from co-occurrences. Chowdhury [1] experimented with the effectiveness of word embedding in authorship attribution for the Bangla language across various architectures. Another type of embedding, which we try to analyze in this paper, is character embedding. Character CNN was first introduced by Zhang [2] for the text classification task. Through the empirical experiments of Sebastian [3] and Jozefowicz [4], character-level NLP has proven to be very promising in various ways. Although it may seem that a character on its own does not have any semantic value, Radford [5] illustrates that character-level models can capture the semantic properties of text. Character-level models are also better at handling out-of-vocabulary words, misspellings, etc., and provide an open vocabulary. Another major advantage is that the dimension can be reduced to as low as 16, unlike word embedding where the dimension can increase up to 300 while the vocabulary is also huge. So, character embedding removes a bottleneck in training tasks and gives huge advantages in computational complexity. Our approach in this paper is to investigate how character embedding performs in the task of authorship attribution in the Bangla language. The Bangla language has numerous words with joint letters which can be written in a few</s>
<s>different forms. Moreover, there are some words with the same meaning but slightly different spelling. These inconsistencies are not recognized by word-level models, but character-level models can capture and relate words of this kind, making such models more appropriate for the Bangla language. A comparison of character embedding with word embedding is discussed according to the findings. Experiments with and without pre-trained embedding layers have also been done to show the effectiveness of the information captured in the embeddings. To our knowledge, no previous work, analysis, or investigation has yet been published on the effect of character embedding in authorship attribution of Bangla literature. This paper follows the structure provided below:
• Related Works - Extensive background study on works relevant to this paper.
• Corpus - The dataset used in our experiment.
• Methodology - The proposed architecture for our character embedding model, along with the strategies used during the training phase of the neural networks, described in depth.
• Experiments - Describes the evaluation process and the model setup for comparison.
• Results and Discussion - Our findings, along with results and possible reasons.
• Conclusion - Some recommendations and scope for future research in this field.
978-1-7281-5842-6/19/$31.00 ©2019 IEEE

II. RELATED WORKS
A. On Authorship Attribution
Authorship attribution has been a topic of important research for a long time. With increased anonymity on the internet and easy fraud, authorship attribution of writings has become crucial. For authorship attribution, work on varying degrees of feature selection [6], including advanced features such as local histograms [7], naive similarity-based models [8], and SVMs [9], has been explored. A semi-supervised approach to authorship attribution was also taken [10].
SOTA was achieved by Ruder [3] using character-level and multi-channel CNNs. Compared to other languages, very few works have been done in Bangla, lacking any sort of high benchmarks until very recently. Das and Mitra [11] worked with a really small dataset of 36 documents and 3 authors to perform unigram and bigram feature-based classification. Chakraborty [12] worked with SVMs on 3 authors to achieve up to 84% accuracy. Shanta Phani also attempted to attribute 3 authors with machine learning [13]. P. Das, R. Tasmim, and S. Ismail used 4 authors of current times and hand-drawn features such as word frequency, type-token ratio, number of various POS, word/sentence lengths, etc. [14]. 90.67% accuracy was achieved by Hossain and Rahman using multiple features along with cosine similarity [15]. Pal, Siddika, and Ismail achieved 90.74% accuracy with 6 authors using an SVM on one feature [16]. Multi-layered perceptrons were employed by Phani, Lahiri, and Biswas [17]. Impressive results were achieved very recently by [1] using various word embeddings on a 6-author dataset. They demonstrated the effects of various architectures and word embeddings on authorship attribution and concluded that fastText's skip-gram used with a CNN tends to beat all other models in terms of accuracy. To our knowledge, no work has been done on the character-level classification task in Bangla literature. The effects of Bangla alphabet complexity and language formulation on architectural design and character embedding learning remain largely untouched.
B. On Embedding
Embeddings are effectively mappings from various entities (character, word, sentence, etc.) to continuous vector spaces in high dimensions. The relation among the numerical representations gives a semantic, syntactic and</s>
<s>morphological meaning of the entities. These meanings are leveraged by machine learning techniques to find patterns in texts and thus perform various tasks such as classification.
1) Word Embedding: Representing words in continuous vector spaces is considered one of the breakthroughs of NLP. Word embeddings are learned in the form of an embedding layer or separately in an unsupervised manner. Unsupervised techniques include the Continuous Bag-of-Words (CBOW) and skip-gram models, famously implemented by Word2Vec and fastText. There are also co-occurrence statistical methods such as GloVe. Santos [18] used word embeddings with convolutional models, showing significant improvements over baseline methods. Word embeddings have been used to improve the performance of sentiment analysis [19]. Often pre-trained embeddings are used, or embeddings are learned for specific tasks such as tree-structured long short-term memory networks [20] and multi-perspective sentence similarity modeling [21]. Although words started out as the units of text, various works have begun to break down words and work at subword and character levels. Wieting [22] creates subword embeddings from counts of character n-grams.
2) Character Embedding: Character-level embeddings are used in various ways, either by themselves or to produce embeddings of higher levels, e.g. for words. Character embeddings have been employed in POS tagging [23], language modelling [23] and dependency parsing [24]. Character-RNNs were used for machine translation, for representing words [25] or to generate character-level translations [26]. Pure character-level classification was first explored using a CNN architecture [2]. Jozefowicz [4] shows that a character-level language model can significantly outperform state-of-the-art models; their best-performing model combines an LSTM with CNN input over the characters. Besides using either just word or character embeddings, ideas of combining them have also been introduced [27].
Attempts to learn character embeddings and serve them as pre-trained have also been explored [28].
III. CORPUS
Because of the scarcity of standard datasets for authorship attribution, we made a custom web crawler to collect the data on our own. We collected writings from an online Bangla e-library containing writings (e.g., novels, stories, series, etc.) of different authors. Table I shows the details of our dataset. Our dataset, with 13.4+ million words, is larger than the previously worked-on datasets for Bangla mentioned in section II. The dataset was equally partitioned, with each document having the same length of 750 words. Various subsets of authors were chosen, and the dataset was truncated so that each author has the same number of samples. The dataset from the paper [1] was also used; it consists of 6 authors with 350 sample texts per author and a total word count of 2.3+ million.
TABLE I. CORPUS DETAILS
Author        Word count  Unique words
candidate 01    351750     44477
candidate 02    421500     62485
candidate 03    825000     53163
candidate 04    666000     84888
candidate 05    636750     67579
candidate 06    984000     78717
candidate 07    944250     89956
candidate 08   3388500    161893
candidate 09    357000     43864
candidate 10    786000     69182
candidate 11   1056000     69648
candidate 12   1472250    109230
candidate 13    698250     76071
candidate 14    581250     84311
For pre-training our model, we used another large corpus of Bangla newspaper articles on 6 topics: accident, crime, education, entertainment, environment, and sports. This dataset consists of 10564543 tokens.
IV. METHODOLOGY
A. Proposed Architecture
Character-level CNNs can sufficiently replace words for classification [2]. This means a CNN does not require the syntactic or semantic structure of a language, which makes such approaches effectively language-independent, as the number of characters is limited. To this end,</s>
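The dataset preparation described here (fixed 750-word documents, every author truncated to the same sample count) can be sketched as follows. This is a sketch under assumptions: the paper does not say how a trailing partial chunk was handled (dropped here), and all function names are our own.

```python
def make_samples(text, sample_len=750):
    # Partition one author's text into equal-length documents of 750 words,
    # as described for the dataset; the trailing partial chunk is dropped
    # (an assumption - the paper does not specify remainder handling).
    words = text.split()
    return [" ".join(words[i:i + sample_len])
            for i in range(0, len(words) - sample_len + 1, sample_len)]

def balance(samples_per_author):
    # Truncate every author to the minimum sample count so classes are equal.
    n = min(len(s) for s in samples_per_author.values())
    return {author: s[:n] for author, s in samples_per_author.items()}

docs = make_samples("word " * 1600)     # 1600 words -> 2 docs of 750 words
print(len(docs), len(docs[0].split()))  # 2 750
```

Balancing in this way is what makes plain accuracy a sufficient comparison metric later in the paper.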
<s>a CNN was used in this paper to perform the task of author attribution. An elaborate set of experiments was performed on 3 different datasets to arrive at an architecture that successfully extracts the character-level features of any sample text. The same architecture was used to prepare the pre-trained character embeddings for classification tasks. The model is a deep neural network starting with 4 convolutional layers, each followed by a maxpool layer of kernel size 3. As is standard in computer vision, the number of filters in the convolutional layers increases while the kernel size decreases at each layer. The kernel sizes are respectively 7, 3, 1 and 1; the numbers of filters are 64, 128, 256 and 256. Beneath all is an embedding layer where each character is represented as a vector of length ‖V‖, i.e. the alphabet size. The convolutional layers are stacked with a fully connected layer of 512 activation nodes, ReLU activation, and dropout. Finally, an output layer with softmax provides the classification probabilities. The Adam optimizer is used for optimization, along with categorical cross-entropy as the loss function.
B. Character Embedding
Character embedding aims to turn characters into meaningful numerical representations in the form of vectors. These vectors may represent the correlation of different characters, or even the correlation of groups of characters, i.e. words, sentences, documents, etc. This concept can be leveraged to use character embeddings to fit misspelled words, rare or new words, slang, or emoticons. They can also easily represent words with variations such as drive, driving, drives, etc. There is no more bottleneck for out-of-vocabulary words: the character set can be used to make any word, even if it is out of vocabulary, in contrast to word embeddings, which simply ignored such words or had weak representations for rare words. This way character embeddings increase generalization compared to words.
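The layer sizes above imply a particular shape flow through the network, which can be checked with simple arithmetic. This sketch assumes 'valid' convolutions with stride 1 and non-overlapping max pooling (stride equal to the kernel size of 3); the paper states neither padding nor pool stride, so these are assumptions.

```python
def conv_out(n, k):
    return n - k + 1   # length after a stride-1 'valid' convolution

def pool_out(n, k=3):
    return n // k      # length after max pooling with kernel = stride = 3

kernels = [7, 3, 1, 1]         # kernel sizes from the text
filters = [64, 128, 256, 256]  # filter counts from the text

n = 3000                       # characters per sample in the classification phase
for k in kernels:
    n = pool_out(conv_out(n, k))

# Flattened feature size fed into the 512-node fully connected layer.
print(n, n * filters[-1])  # 36 9216
```

Under these assumptions, 3000 input characters shrink to 36 positions of 256 filters each before the fully connected layers.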
Another significant improvement is the vocabulary size. Instead of a very large vocabulary of words, character embeddings have a fixed number of characters, which is significantly smaller, and therefore reduce model complexity and the number of parameters by a significant amount. Furthermore, they can be represented with a small vector size (e.g. 16) and still be significantly informative, as opposed to word embeddings, which require vectors of size at least 100-300 for a decent model. The simplest way to represent a character is to use a one-hot encoding; this requires the vector size to be the size of the alphabet. We used one-hot encoding as a baseline for comparison with pre-trained embeddings. Otherwise, one can randomly initialize the vectors, which can be of any size from as small as 16 to as big as 300; this becomes a hyperparameter for tuning.
C. Training the Model
The alphabet size, and therefore the embedding vector size, is 253. Among the 253 different characters are the English letters (capital and small) and digits, Bangla letters and digits, Bangla vowel symbols, and various other punctuation marks and symbols. For comparative training, two sets of embeddings were created for the character set: one-hot encodings, and pre-trained embeddings. The training was done in two phases, as stated below:
1) Pre-training Embedding: To learn character embeddings, the architecture mentioned above was used for classification of the news dataset mentioned in section III. This is in contrast to the usual ways of learning embeddings. No</s>
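The one-hot baseline described above is straightforward to sketch. A toy Latin alphabet is used here purely for illustration; the paper's actual alphabet has 253 symbols including Bangla letters, digits, and vowel signs.

```python
def one_hot(ch, alphabet):
    # Baseline character representation: a vector of alphabet size with a
    # single 1 at the character's index (the paper uses |V| = 253).
    vec = [0] * len(alphabet)
    vec[alphabet.index(ch)] = 1
    return vec

alphabet = list("abcdefghijklmnopqrstuvwxyz")  # toy alphabet for illustration
v = one_hot("c", alphabet)
print(len(v), sum(v), v.index(1))  # 26 1 2
```

A learned embedding replaces these sparse rows with dense trainable vectors of any chosen width, which is what the pre-training phase below produces.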
<s>separate model was used [28] to learn the embeddings. Instead, an already available classification task on a marginally large dataset learns character embeddings for its own purposes. These embeddings can then be used as initialization for the author attribution task, which has a smaller dataset than the former, giving it an initial boost. The model was trained with a learning rate of 0.001 and decay of 0.0001. The maximum length of each text sample was set to 1000 and the batch size to 80. A dropout rate of 0.5 was used in the fully connected layer to prevent over-fitting. The embeddings thus learned have an understanding of how the Bangla language works and provide a meaningful initialization for any classification task. They were then extracted and used for the task of authorship attribution.
2) Performing Classification: To perform the main task of author attribution and comparison, this training phase was performed twice, once with each type of embedding mentioned above, i.e. one-hot and pre-trained. The fully connected layer was given a dropout probability of 0.7 and trained with batch size 128, and the maximum length of each text was set to 3000 characters. Everything else was kept the same. The classification was carried out with 2 author attribution datasets: one with 6 authors [1] and our dataset with a maximum of 14 authors. The larger dataset was trained with 6, 8, 10, 12 and 14 authors to analyze the effects of increasing classes on the proposed model.
V. EXPERIMENTS
We evaluate the performance of the proposed architecture in terms of accuracy, with and without pre-training character-level embeddings, comparing them on the held-out dataset. We also try to infer how the character-level model compares with the word-level models. All models are compared for an increasing number of authors (classes) on the corpus mentioned, to assess the quality of the models. To keep the dataset balanced, the number of samples per class was truncated to the minimum among the classes.
We propose a model for word-level classification mostly similar to our Char-CNN model. The model used for performance analysis is as follows:
A. Word Embedding Model
This model closely resembles the proposed Char-CNN model except for a few differences to tune it for the word-level version of the classification. The model has 2 convolutional layers with kernel sizes 7 and 3 and 128 and 256 filters respectively, each layer followed by a maxpool layer. The model is initialized with pre-trained word embeddings from word2vec and fastText, in both the CBOW and skip-gram versions. The convolutional layers are stacked with an LSTM layer of 100 neurons and a fully connected layer of 512 activation nodes, both with dropout to prevent overfitting. Finally, a softmax layer provides the classification probabilities. It is trained for 10 epochs with a learning rate of 0.001 using the Adam optimizer; the batch size is 32 and 750 words per sample are used as input. All the word-level models have a vocabulary size of 60000 and word embedding vectors of size 300.
VI. RESULTS AND DISCUSSION
The accuracies achieved (in percent) on the test sets of the datasets, with pre-trained embeddings for both word and character levels, are summarized in Table II. Because the datasets were balanced, comparing accuracies is sufficient.
TABLE II. PERFORMANCE COMPARISON OF DIFFERENT MODELS WITH PRE-TRAINED EMBEDDING
#of Authors: 6 [1], 6, 8,</s>
<s>10, 12, 14
samples/author:   350   1100  931   849   562   469
Char-CNN:         83    96    92    86    75    69
W2V (CBOW):       65.3  97    82.8  83.3  76.4  71.8
fastText (CBOW):  65    73    58    35.7  37.31 40.3
W2V (Skip):       79    94    91.1  85.4  82.2  78.6
fastText (Skip):  86    98    95.2  86.35 80.9  81.2
Accuracy comparison (in percent) of the proposed model with and without pre-trained character embeddings is summarized in Table III.
TABLE III. PRETRAINED VS NON-PRETRAINED COMPARISON
#of Authors:           6 [1]  6    8    10   12   14
#of samples/class:     350    1100 931  849  562  469
Pretrained embedding:  83     96   92   86   75   69
Not pretrained:        71     95   82   83   66   59.5
Fig. 1. Accuracy of various models with increasing number of samples.
From the accuracy comparisons shown in Table II, we see that skip-gram as implemented by fastText performs well on the given datasets. So we can infer that subword-level classification extracts a good amount of meaningful information and style from the text. On the other hand, the word2vec models, which use entire words, perform worse. The character-level model performs reasonably well in competition with the subword level as long as the dataset is big enough. When the number of authors increased, the number of samples per author decreased, making it difficult for the character-level model to collect enough information. With larger datasets, this model will be able to perform significantly better [2]. Figure 1 illustrates that with a larger number of samples, the Char-CNN model rises steeply and performs competitively with the other models. In terms of the number of parameters, the character-level model is much superior to its word-level counterparts. The embedding table for the word-level models is of size embedding vector * vocabulary size, i.e. 300 * 60000. On the other hand, the character embedding matrix is of size 253*253, given that we initially used one-hot vectors. This size can also be reduced to as low as 253*16, as was done in some research [4]. Another thing to consider is the time it takes to train the models.
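The parameter comparison above is simple arithmetic and can be made concrete; the variable names are our own, and the sizes are the ones quoted in the text.

```python
# Embedding-table sizes quoted in the text: the word-level models use
# 300-dimensional vectors over a 60,000-word vocabulary, while the
# character model starts from one-hot vectors over a 253-symbol alphabet.
word_params = 300 * 60_000  # 18,000,000 entries
char_params = 253 * 253     # 64,009 entries
print(word_params // char_params)  # 281: the word table is ~281x larger
```

Shrinking the character vectors to width 16, as cited from other work, would reduce the character table further still.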
For the word embedding models, a pure CNN does not work satisfactorily, so an LSTM layer had to be added to incorporate sequential information into the model. This improves accuracy at the cost of longer training, around 15-20 minutes. On the other hand, the character-level model works significantly well using only convolutional layers, taking less than 2 minutes to train. This effect of training time becomes greatly magnified in large-scale cases, making the word-level model unfit for lightweight devices. As stated in the paper [2], ConvNets with character embeddings can completely replace words and work even without any semantic meaning, which means that convolutional layers can extract whatever information is necessary for author attribution, given enough data. To illustrate the need for pre-trained character embeddings, we see from Table III that using a pre-trained embedding increases the accuracy across datasets and different numbers of authors, regardless of the amount of data for each author. This shows that these naively learned embeddings contain valuable information that can easily be applied to various tasks of the language, including author attribution, and improve performance by a few degrees. These numerical representations of characters contain information about the morphology and syntax of the language, among other things. Therefore such embeddings can be learned from any task and applied to other tasks as</s>
<s>a form of transfer learning, given that the alphabet remains the same.
VII. CONCLUSION
So far no work has been done to evaluate the usefulness of character embeddings for classification tasks in the Bangla language. We attempt to fill this gap and compare character embeddings with word embeddings, showing that character embeddings perform almost as well as the best word embedding model. Beyond accuracy, character-level classification also has the upper hand in terms of memory, time and number of parameters. Considering the small size of our datasets, we hope for improved performance with larger datasets, as is the case for character-level ConvNets [2]. Besides, such networks also work better with non-curated texts, which are hard for word-level embeddings to capture, and are thus more applicable to real-life scenarios. Furthermore, we analyzed the importance of pre-trained character embeddings for author attribution and showed that pre-training can result in better performance. Since a very large corpus is not yet available for the Bangla language, we must come up with solutions that tackle attribution tasks sufficiently well even with little data. Therefore our future work includes the combination of both character- and word-level embeddings to perform the attribution task, in an attempt to combine the power of both types of embeddings. More advanced transfer learning can also be performed by using language models in place of embeddings before classification. Language models and embeddings can also be combined to give greater generalization for the Bangla language.
REFERENCES
[1] H. A. Chowdhury, M. A. H. Imon, and M. S. Islam, "A comparative analysis of word embedding representations in authorship attribution of Bengali literature," 2018.
[2] X. Zhang, J. Zhao, and Y. LeCun, "Character-level convolutional networks for text classification," 2015.
[3] S. Ruder, P. Ghaffari, and J. G. Breslin, "Character-level and multi-channel convolutional neural networks for large-scale authorship attribution," arXiv preprint arXiv:1609.06686, 2016.
[4] R. Jozefowicz, O. Vinyals, M. Schuster, N. Shazeer, and Y. Wu, "Exploring the limits of language modeling," arXiv preprint arXiv:1602.02410, 2016.
[5] A. Radford, R. Jozefowicz, and I. Sutskever, "Learning to generate reviews and discovering sentiment," arXiv preprint arXiv:1704.01444, 2017.
[6] E. Stamatatos, "A survey of modern authorship attribution methods," Journal of the American Society for Information Science and Technology, 2009.
[7] H. J. Escalante, T. Solorio, and M. Montes-y Gómez, "Local histograms of character n-grams for authorship attribution," 2011.
[8] M. Koppel, J. Schler, and S. Argamon, "Authorship attribution in the wild," Language Resources and Evaluation, 2011.
[9] A. Narayanan, H. Paskov, N. Z. Gong, J. Bethencourt, E. Stefanov, E. C. R. Shin, and D. Song, "On the feasibility of internet-scale author identification," 2012.
[10] J. A. Nasir, N. Görnitz, and U. Brefeld, "An off-the-shelf approach to authorship attribution," 2014.
[11] S. Das and P. Mitra, "Author identification in Bengali literary works," 2011.
[12] T. Chakraborty, "Authorship identification in Bengali literature: a comparative analysis," arXiv preprint arXiv:1208.6268, 2012.
[13] S. Phani, S. Lahiri, and A. Biswas, "Authorship attribution in Bengali language," 2015.
[14] P. Das, R. Tasmim, and S. Ismail, "An experimental study of stylometry in Bangla literature," 2015.
[15] M. T. Hossain, M. M. Rahman, S. Ismail, and M. S. Islam, "A stylometric analysis on Bengali literature for authorship attribution," 2017.
[16] U. Pal, A. S. Nipu, and S. Ismail, "A machine learning approach for stylometric analysis of Bangla literature," 2017.
[17] S. Phani, S. Lahiri, and A. Biswas, "A machine</s>
<s>learning approach for authorship attribution for Bengali blogs," 2016.
[18] I. Santos, N. Nedjah, and L. de Macedo Mourelle, "Sentiment analysis using convolutional neural network with fastText embeddings," 2017.
[19] E. Rudkowsky, M. Haselmayer, M. Wastian, M. Jenny, Š. Emrich, and M. Sedlmair, "More than bags of words: Sentiment analysis with word embeddings," Communication Methods and Measures, 2018.
[20] K. S. Tai, R. Socher, and C. D. Manning, "Improved semantic representations from tree-structured long short-term memory networks," arXiv preprint arXiv:1503.00075, 2015.
[21] H. He, K. Gimpel, and J. Lin, "Multi-perspective sentence similarity modeling with convolutional neural networks," 2015.
[22] J. Wieting, M. Bansal, K. Gimpel, and K. Livescu, "Charagram: Embedding words and sentences via character n-grams," arXiv preprint arXiv:1607.02789, 2016.
[23] W. Ling, T. Luís, L. Marujo, R. F. Astudillo, S. Amir, C. Dyer, A. W. Black, and I. Trancoso, "Finding function in form: Compositional character models for open vocabulary word representation," arXiv preprint arXiv:1508.02096, 2015.
[24] M. Ballesteros, C. Dyer, and N. A. Smith, "Improved transition-based parsing by modeling characters instead of words with LSTMs," arXiv preprint arXiv:1508.00657, 2015.
[25] M.-T. Luong and C. D. Manning, "Achieving open vocabulary neural machine translation with hybrid word-character models," arXiv preprint arXiv:1604.00788, 2016.
[26] J. Chung, K. Cho, and Y. Bengio, "A character-level decoder without explicit segmentation for neural machine translation," arXiv preprint arXiv:1603.06147, 2016.
[27] D. Liang, W. Xu, and Y. Zhao, "Combining word-level and character-level representations for relation classification of informal text," 2017.
[28] K. Cao and M. Rei, "A joint model for word embedding and word morphology," arXiv preprint arXiv:1606.02601, 2016.</s>
<s>Developing the Bangladeshi National Corpus - a Balanced and Representative Bangla Corpus. Conference Paper, December 2019. DOI: 10.1109/STI47673.2019.9068005. 2019 International Conference on Sustainable Technologies for Industry 4.0 (STI), 24-25 December, Dhaka. 978-1-7281-6099-3/19/$31.00 ©2019 IEEE. Khan Md Anwarus Salam (Chief Technology Officer), Mahfujur Rahman (Research Coordinator), Md Mahfuzus Salam Khan (Chief Executive Officer), Dream Door Soft Ltd.
Dhaka, Bangladesh {anwar*, risad°, mahfuzǂ}@dreamdoorsoft.com Abstract— The need for a balanced, representative national-scale corpus has been growing rapidly for Bangla, a language already tagged as 'low resource'. Many sporadic empirical works have been carried out in NLP and Computational Linguistics, yet these are never enough, and none of them can bear the best fruit without the help of a standard corpus. To address these issues, the goal of this research work was set to compile the Bangladeshi National Corpus (BDNC). This paper describes the development process of the BDNC (first phase: the Bangla monolingual corpus). The whole task is divided into three major phases. The goal of the first phase is to build a representative monolingual corpus of at least 100 million Bangla words. In the second phase, a sub-corpus will be added consisting of a parallel corpus of 1 million words in Bangla and English. In the third and final phase, the parallel corpus will incorporate 15 foreign languages (including English), comprising a weighted corpus size of at least 15 million words. Keywords— Bangla, corpus, balanced, representative, monolingual corpus, multilingual corpus, translation corpus, parallel corpus. I. INTRODUCTION Bangla, also known as Bengali, is the national language of Bangladesh and a mother tongue in the Indian state of West Bengal. Bangla has more than 260 million speakers worldwide and is the sixth most spoken language in the world [22]. However, Bangla is still considered a low-resource language because no balanced corpus with digitally accessible resources is available. A corpus is a much-needed structured data set of language instances that serves as the heart of many Natural Language Processing (NLP) tools. Researchers in other scientific domains also find it a useful tool.
However, for many practical reasons and despite such advantages, few balanced, representative corpora have been developed for Bangla. Bangla is already considered a low-resource language as far as language technology is concerned, and the lack of a standardized corpus is one reason behind it. Needless to say, the relation between these two problems can be labelled an example of bidirectional causation. Throughout the</s>
<s>document, we discuss our approaches to building the corpus and the methods we use in the course of corpus creation. Bangladesh is often considered one of the fastest-emerging nations in the world in terms of economic growth. Besides, the country has a reputation for utilizing IT in creative and effective ways to solve many of its problems. Yet the challenges of the 4th Industrial Revolution are enormous for countries like Bangladesh. For Bangladesh, readiness for Industry 4.0 means being equipped with a set of prerequisites that includes Bangla Natural Language Processing (NLP) techniques, tools, and various Bangla-enabled AI solutions, among other major components. A well-made, well-maintained, balanced and representative corpus gives solid ground to Bangla NLP research and other related fields, thus fostering the backbone development required to face the challenges of Industry 4.0. There are already some notable works in the field of corpus creation. Salam, Yamada and Nishino [2] proposed the first balanced corpus for the Bangla language. Sarkar, Pavel and Khan [17] attempted an automatic corpus-creation process in which they collected all the already available texts from the web and other offline resources as the text source of the corpus. Another attempt was the creation of the CIIL corpus by Dash and Chaudhuri [16], which was actually a collection of corpora of nine Indian languages including Bangla; it has a size of 3 million words. Mumin, Shoeb, Selim and Iqbal built a corpus titled SUPara [18], an English-Bangla parallel corpus, in 2011; it has more than 200,000 words in either language. The same authors created another corpus named SUMono [14] in 2013, a monolingual corpus of more than 27 million words, built following the framework of the American National Corpus.
Another such parallel corpus creation attempt was carried out recently though the data collection method was crowd-sourcing [22]. This Bangla-English corpus has a total of 517 Bangla sentences and 2143 corresponding English translations while every Bangla sentence was translated by an average of 4 times via crowd-sourcing. Shamshed and Karim [20] proposed a corpus intended for an efficient way of information retrieval. A newspaper specific corpus was created by Majumder and Arafat [19] where the authors used texts from a Bangla daily newspaper for a particular year. Khan, Ferdousi and Sobhan [15] created another Bangla corpus titled “BDNC01”. The size of the corpus was 12 million words and the texts were collected from some of the Bangla daily newspapers and some Bangla literature. II. DEVELOPMENT PHASES OF BDNC A. First Phase (Bangla Monolingual Corpus) A monolingual corpus can be either general or special. In our scope, we are up to build the monolingual corpus as a general one so that it can eventually represent the national variety of colloquial Bangla language [13]. However, the corpus, in the long run, will also reflect the diachronic features of Bangla language. Below is</s>
<s>the flow-chart showing the principal steps that we have considered while developing our corpus (monolingual) in the first phase. Fig. 1. The development process of the Bangla corpus B. Second & Third Phase (Multilingual Parallel Corpora) In the second and third phases of the corpus development task, we will be following the flowchart below, as the goal is to develop multilingual parallel corpora. Fig. 2. The development process of the parallel corpus In principle, the goal of the second phase is to translate (human-aided) or gather already translated texts (both Bangla to English and English to Bangla) in order to build a parallel corpus (translation corpus) in Bangla-English. The aim of the third phase of the project is to translate manually (human-aided) popular website contents and other available resources written in Bangla or English into a number of foreign languages, including Arabic, Bangla, Spanish, French, Mandarin, Japanese, Korean, Hindi, Persian, Burmese, Bhutanese, Urdu, Russian, German, Portuguese and English. III. CORPUS DESIGN Before starting to build the actual corpus, it is mandatory to design it in proper alignment with its goal and purpose. We considered two major criteria for designing the corpus: Purpose design and Model design. The following table lists the different minor notions that we also considered while designing under these criteria.
TABLE I. CORPUS DESIGN CRITERIA
  Purpose Design              Model Design
  Scope of usage defining     Corpus Typology design
  User defining               Tagset design
  Service and QoS design      Storage and Database design
To make a balanced and representative corpus, we are following three independent selection criteria: domain, time and medium [2]. We followed the Chinese SINICA corpus design methodology and added three more attributes: author, writing level and target audience. Table II shows the proposed domain balance percentage.
TABLE II. DOMAIN BALANCE PERCENTAGE
  Domain / Source    Percentage
  Text Books         20%
  Mass Media         20%
  Literature         15%
  Spoken corpus      10%
  Translations       5%
IV. THE DEVELOPMENT PROCESS OF THE BANGLA MONOLINGUAL CORPUS After designing the purpose and model of the corpus, one can start building it. Following are the steps that we have followed to build the monolingual corpus. A. Collecting Raw Data In order to maintain representativeness and to build a balanced corpus, texts are to be collected from various sources; this ensures all the features (both spoken and written forms of colloquial Bangla used in various domains) and objectives (balance, representativeness) that the corpus should hold. The text can be collected in many ways, including using OCR, web-crawling, typewriting, existing electronic text, using STT, etc. • Using OCR: Optical Character Recognition (OCR) is a way of obtaining electronic texts from books. In this case, human-aided proofreading or editing is needed to correct scanning and other technical errors. • Typing: Right now, scanner machines and computer programs are not efficient enough at recognizing Bengali texts of different typefaces, lower-quality typography, or handwriting. Therefore, typing can be considered a solution,</s>
<s>though it is a labour-intensive and resource-hungry option. Still, this method is better for leaflets, hand-written items, and recorded speech. • Existing electronic texts: Many texts already exist in electronic form in Bengali and are a great source of text, such as Wikipedia, Banglapedia, newspapers, magazines, etc. • STT: Recorded speech can also be transformed into electronic text using speech-to-text tools. This kind of component helps greatly in building a collection of texts in oral form. In our work, we have primarily collected the data from different web domains (online newspapers, Wikipedia, Banglapedia, etc.) using self-made web-crawler tools. The collected data mainly represent the written aspects of the Bangla language. However, according to the original plan, we are about to include a spoken corpus and scale the current corpus up to the targeted size. For collecting data from different websites, we developed and used a web-crawler that can detect the targeted content and fetch it. B. Encoding Adjustment It must be ensured that all the collected texts are in UTF-8 (Unicode) format before proceeding further with building this corpus. If any text segment is found written in a non-UTF-8 encoding, it must be converted into Unicode. During the text collection phase, we found that not all the Bangla text data available online are in UTF-8 format; some ANSI-encoded Bangla texts are still available on the web for legacy reasons. To solve this problem, we developed an encoding-adjusting tool that looks for encoding issues across the collected texts and converts encodings where required. C. Filtering The collected text must be filtered for unwanted, unrecognized, foreign-language and misspelt words and for garbage characters. Filtering can be done automatically by developing tools specifically designed for the Bangla language.
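Such a filtering step can be sketched in a few lines of Python. This is only an illustration, not the authors' in-house tool (which is unpublished): the set of characters kept here (the Bangla Unicode block, the danda marks, digits, whitespace and basic punctuation) is our own assumption.

```python
import re

# Illustrative character-level filter for raw Bangla text (a sketch, not
# the paper's actual tool). It keeps the Bangla script block (U+0980-U+09FF),
# the danda and double danda (U+0964, U+0965), ASCII digits, whitespace and
# basic punctuation; everything else is treated as a garbage character.
KEEP = re.compile(r"[^\u0980-\u09FF\u0964\u0965\s0-9.,;:!?'\"()\-]")

def clean_text(raw: str) -> str:
    """Drop unwanted characters and normalize runs of whitespace."""
    text = KEEP.sub(" ", raw)          # replace garbage characters with spaces
    text = re.sub(r"\s+", " ", text)   # collapse consecutive whitespace
    return text.strip()
```

A production filter for this task would of course go further (e.g. dictionary-based checks for misspelt or foreign-language words), which a simple character whitelist cannot do.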
Primarily, we have taken care of the unwanted characters, symbols and spacing issues persisting in the electronic texts using a home-developed tool. However, for lack of an advanced spell checker, we could not check the spelling of the texts. In fact, within our current scope we do not intend to check spellings, as this is just a written corpus for now. D. Word Segmentation & Tokenizing The next big step after filtering is segmentation/tokenizing: the process of segmenting running text into words and sentences. For languages like Bangla, word segmentation can be performed by a simple script given whitespace and punctuation, but this still does not guarantee 100 percent success; only a tokenizer capable of handling as many linguistically ambiguous features as possible can be accepted here. A token has to be linguistically significant and methodologically useful. In our work, we have developed a beginner-level tokenizer that can break a running sentence into word forms, which were later labelled by the annotator. E. Annotation (Tagging) We will be using the universal CoNLL-U format for annotation purposes. In the CoNLL-U format, annotations are encoded in plain-text files (UTF-8, using only the LF character as line break) with three types</s>
<s>of lines: 1. Word lines containing the annotation of a word/token in 10 fields separated by single tab characters. The fields are ID, FORM, LEMMA, UPOSTAG, XPOSTAG, FEATS, HEAD, DEPREL, DEPS and MISC. 2. Blank lines marking sentence boundaries. 3. Comment lines starting with a hash (#). Example of annotating a Bangla sentence in the CoNLL-U format:
# newdoc id = Rabindra_cd_20170926063000_BN
# sent_id = Rabindra_cd_20170926063000_BN-0001
# text = রােজশ ু েল যায়।
1  রােজশ  রােজশ  PROPN  NNP  Number=Sing  0  root  _  _
2  ু েল  ু ল  NOUN  NN  Number=Sing  1  obl  _  _
3  যায়  যায়  VERB  VBZ  Mood=Ind|Tense=Present  1  _  _
4  ।  ।  PUNCT  ।  _  1  punct  _  _
V. TOOLS TO UTILIZE THIS CORPUS We have developed some corpus-analyzer tools of our own, as there are very few resources available in this segment and very few of the tools available nowadays support the Bangla language. We have developed a frequency analyzer, N-grams (lexical bundles), and concordance (node, KWIC, sorting, expanded context). VI. RESULT AND ANALYSIS Following are some of the results produced by the tools that we have developed. We have separated our corpus into 4 plain-text files of different sizes without compromising any of its qualitative features, such as text domain and other text qualities. The four parts created in this separation process are named mini, kilo, mega and giga. The reason behind such segmentation of the corpus files was to make sure the corpus is easily manageable and scalable. A. Data structure Our primary analysis shows that the 4 documents contain a total of 7,678,597 words (tokens), while all the documents combined hold 285,496 unique word forms (types). The weighted average Type-Token Ratio across the corpus is 0.0372. TABLE III.
WORD TYPES AND DISTRIBUTIONS IN THE CORPUS (4 FILES)
  File   Words     Types    Ratio    Words/sentence
  Mini   445868    45254    0.10149  14.269145838
  Kilo   756241    62095    0.08211  14.262239740
  Mega   2328455   137241   0.05894  13.801686938
  Giga   4148033   180135   0.04342  14.211335402
Document length: Longest: giga (4148033 words); mega (2328455 words). Shortest: mini (445868 words); kilo (756241 words). B. Word frequency It is known that the most frequent words in a written corpus are usually the stop words. Stop words are generally filtered out in many applications of NLP and other studies; however, here we have considered all varieties of lexical items while preparing the word-frequency list. The following table shows a frequency analysis of the lexical items persisting in the corpus.
TABLE IV. MOST FREQUENT WORDS IN THE CORPUS
  Word    Frequency  %           Word    Frequency  %
  ও       74865      0.97498280  এই      27406      0.35691416
  এ       51757      0.67404241  বেলন    23384      0.30453480
  না      51418      0.66962754  িতিন    22981      0.29928645
  কের     50955      0.66359779  এবং     22596      0.29427251
  থেক     39744      0.51759456  িনেয়    22416      0.29192833
  হয়      34932      0.45492686  এর      21860      0.28468742
  করা     34615      0.45079850  হে      21447      0.27930884
  হেব     29297      0.38154105  এক      21220      0.27635257
  হেয়     28015      0.36484530  কর      21186      0.27590978
  জন      27663      0.36026113  ম       20931      0.27258886
C. Type-Token Ratio (TTR) The ratio of the total number of words</s>
<s>(token) in a document to the number of unique words (types) in the document is called the Type-Token Ratio. Highest: mini (0.101), kilo (0.082). Lowest: giga (0.043), mega (0.059). A lower vocabulary density usually indicates complex text with a pool of unique words, and a higher ratio indicates simpler text with words reused. The data indicate that the files mini and kilo contain more 'function words', relative to unique or content words, than their siblings giga and mega. Average Words per Sentence: In our corpus, we have found that the weighted average number of words per sentence is 14.1. Below are the file-specific average words-per-sentence rates. Highest: mini (14.3), kilo (14.3). Lowest: giga (14.2), mega (13.8). D. Collocation (N-gram analysis) We have analyzed the most co-occurring words or word clusters, known as collocations, using an N-gram architecture. Below are some of the discovered collocation data of the corpus, measured using different N-gram techniques (uni-gram and tri-gram).
TABLE V. THE COLLOCATION OF THE WORDS IN THE CORPUS (UNI-GRAM)
  Word   Count   Collocate  Count   Word    Count   Collocate  Count
  করা    34615   হয়         7469    করা     34615   হে         1808
  করা    34615   হেয়েছ      7204    হয়      34932   না         1670
  এ      51757   ছাড়া       3457    এ       51757   ধরেনর      1649
  এ      51757   সময়        3232    না      51418   থাকেল      1541
  করা    34615   হেব        3034    হয়      34932   এ          1526
  হেব    29297   না         2499    এ       51757   িবষেয়      1500
  এ      51757   ব াপাের    2197    এ       51757   জন         1354
  করা    34615   হে         1808    এ       51757   কথা        1284
  হয়     34932   না         1670    হেব     29297   এর         1249
  এ      51757   ধরেনর      1649    হেয়েছ   28015   এ          1239
TABLE VI. COLLOCATION OF THE WORDS IN THE CORPUS (TRI-GRAM)
  Word   Count   Collocate  Count   Word    Count   Collocate  Count
  করা    34615   হয়         7736    এ       51757   করা        2493
  করা    34615   হেয়েছ      7431    এ       51757   ব াপাের    2297
  এ      51757   ছাড়া       3517    হয়      34932   না         2210
  করা    34615   হেব        3505    হেয়েছ   28015   এ          2201
  এ      51757   সময়        3496    এ       51757   জন         2095
  হেব    29297   না         3103    না      51418   করেত       2088
  হয়     34932   এ          2955    না      51418   করা        2073
  না     51418   কােনা      2737    করা     34615   না         2030
  কের    50955   এ          2670    না      51418   না         1996
  করা    34615   এ          2581    কের     50955   থেক        1977
E.
Data visualization We have analyzed the data using many other techniques and tools, both publicly available and developed by us, and we now visualize some aspects of the corpus. Here are some examples of comparative corpus-data visualization across the multiple corpus data files. Relative frequency: To find the relative frequency of any lexical item in our corpus, we divide the frequency of the lexical item by the total number of lexical items in the sample. In our case, the samples are the 4 separated data files of the corpus. The following chart shows the relative frequencies of the most frequent words across the 4 corpus data files. Fig. 3. Relative frequency of the top 4 (most frequent) words F. Grammatical analysis: We also wanted to use the corpus for more linguistic (traditional grammatical) research, as shown in Fig. 3. Therefore, we observed</s>
<s>the comparative frequency of some 'অব য়' words ('অব য়' is a part-of-speech, or grammatical-category, name in Bangla grammar). In comparison to English grammar, 'অব য়' can be used as a preposition, conjunction or interjection in a Bangla sentence. 'ও' and 'এবং' are somewhat similar types of POS in the Bangla language, considering their semantic boundary, and both are used as conjunctions. We wanted to see how frequent these two words are and which is more frequent in the Bangla language (in the context of our corpus). Fig. 4 shows the result. Fig. 4. Relative frequency of the Bangla 'অব য়' words 'ও' and 'এবং' CONCLUSION The development of a corpus at our targeted scale is not only a huge task but also a tiring and resource-hungry job. Still, we have compiled a corpus of over 7.6 million words. Due to limitations of time and resources, we could not annotate the entire corpus with the full features that we had originally expected. In the future, we are going to annotate the entire corpus and scale up the size of the existing corpus. Thereafter we will start developing the parallel corpus. REFERENCES
[1] Gerrit Botha and Etienne Barnard, 2005. Two approaches to gathering text corpora from the World Wide Web, Proceedings of the 16th Annual Symposium of the Pattern Recognition Association of South Africa.
[2] Salam, K. M. A., Yamada, S., & Nishino, T. (2012, May). Developing the first balanced corpus for Bangla language. In Informatics, Electronics & Vision (ICIEV), 2012 International Conference on (pp. 1081). IEEE.
[3] Salam, K. M. A., Yamada, S. and Nishino, T. 2010. "English-Bengali Parallel Corpus: A Proposal", Tokyo, TriSAI – 2010.
[4] Salam, K. M. A., Yamada, S., Nishino, T. and Mumit Khan, 2009. "Example-Based English-Bengali Machine Translation Using WordNet", Tokyo, TriSAI – 2009.
[5] Tony McEnery and Andrew Wilson, 1996. Corpus Linguistics, Edinburgh University Press.
[6] Yeasir Arafat, Md.
Zahurul Islam and Mumit Khan, 2006. Analysis and Observations From a Bangla news corpus, Proc. of 9th International Conference on Computer and Information Technology, Dhaka, Bangladesh.
[7] Baker, Mona (1995) "Corpora in translation studies: an overview and some suggestions for future research", Target 7, 2, pp. 223-243.
[8] Biber, Douglas (1993) "Representativeness in corpus design", in Literary and Linguistic Computing, 8, pp. 243-257.
[9] Chen, Kehjiann, Chu-ren Huang, Li-ping Chang and Hui-li Hsu. 1996. SINICA CORPUS: Design methodology for balanced corpora. Language, Information and Computation 11:167-176.
[10] Dash, Niladri Sekhar and Chaudhuri, B.B. 2001. A corpus-based study of the Bengali language. Indian Journal of Linguistics. Vol. 20, No. 1, pp. 19-40.
[11] Dewan Shahriar Hossain Pavel, Asif Iqbal Sarkar and Mumit Khan, 2006. A Proposed Automated Extraction Procedure of Bangla Text for Corpus Creation in Unicode, Proc. International Conference on Computer Processing of Bengali.
[12] Frankenberg-Garcia, A. and Santos, D. (2003) "Introducing COMPARA: the Portuguese-English Parallel Corpus", Corpora in translator education, Citeseer, pp. 71-87.
[13] Zanettin, F. (2011). Translation and corpus</s>
<s>design.
[14] M. A. Al Mumin, A. A. M. Shoeb, M. R. Selim, and M. Z. Iqbal, "Sumono: A representative modern bengali corpus," SUST Journal of Science and Technology, vol. 21, pp. 78–86, 2014.
[15] S. Khan, A. Ferdousi, and M. A. Sobhan, "Creation and analysis of a new bangla text corpus bdnc01," International Journal for Research in Applied Science & Engineering Technology (IJRASET), vol. 5, 2017.
[16] N. S. Dash, B. B. Chaudhuri, P. Rayson, A. Wilson, T. McEnery, A. Hardie, and S. Khoja, "Corpus-based empirical analysis of form, function and frequency of characters used in bangla," in Rayson, P., Wilson, A., McEnery, T., Hardie, A., and Khoja, S. (eds.), Special issue of the Proceedings of the Corpus Linguistics 2001 Conference, Lancaster: Lancaster University Press, UK, vol. 13, 2001, pp. 144.
[17] A. I. Sarkar, D. S. H. Pavel, and M. Khan, "Automatic bangla corpus creation," BRAC University, Tech. Rep., 2007.
[18] M. A. Al Mumin, A. A. M. Shoeb, M. R. Selim, and M. Z. Iqbal, "Supara: A balanced english-bengali parallel corpus," 2012.
[19] K. M. Majumder and Y. Arafat, "Analysis of and observations from a bangla news corpus," 2006.
[20] J. Shamshed and S. M. Karim, "A novel bangla text corpus building method for efficient information retrieval," Journal of Convergence Information Technology, vol. 1, no. 1, pp. 36–40, 2010.
[21] Arora, S., Arora, K. K., Roy, M. K., Agrawal, S. S., & Murthy, B. K. (2016). Collaborative Speech Data Acquisition for Under Resourced Languages through Crowdsourcing. Procedia Computer Science, 81, 37-44.
[22] Nowshin, N., Ritu, Z. S., & Ismail, S. (2018, December). A Crowd-Source Based Corpus on Bangla to English Translation. In 2018 21st International Conference of Computer and Information Technology (ICCIT) (pp. 1-5). IEEE.
[23] Salam, K. M. A., Khan, M., & Nishino, T. (2009). Example based English-Bengali machine translation using WordNet.
[24] Khan, M. A. S., Uchida, H., & Nishino, T. (2011, November).
How to develop universal vocabularies using automatic generation of the meaning of each word. 7th International Conference on Natural Language Processing and Knowledge Engineering. IEEE.
[25] Salam, K. M. A., Yamada, S., & Nishino, T. (2011). Example-based machine translation for low-resource language using chunk-string templates. 13th Machine Translation Summit, Xiamen, China.
[26] Salam, K. M. A., Yamada, S., & Nishino, T. (2013). How to translate unknown words for English to Bangla Machine Translation using transliteration. Journal of Computers, 8(5), 1167-1174.
[27] Salam, K. M. A., Uchida, H., Yamada, S., & Nishino, T. (2012, August). UNL Ontology Visualization for Web. In 2012 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (pp. 542-545). IEEE.
[28] Salam, K. M. A., Uchida, H., & Nishino, T. (2012, December). Multilingual universal word explanation generation from UNL ontology. In 24th International Conference on Computational Linguistics (p. 137).
[29] Salam, K. M. A., Setsuo, Y., & Nishino, T. (2011, December). Translating unknown words using WordNet and IPA-based-transliteration. In 14th International Conference on Computer and Information Technology (ICCIT 2011) (pp. 481-486). IEEE.
[30]</s>
<s>Uchida, H., Zhu, M., & Khan, M. A. S. (2012, December). UNL explorer. In Proceedings of COLING 2012: Demonstration Papers (pp. 453-458).
[31] Salam, K. M. A., Uchida, H., Yamada, S., & Nishino, T. (2013). Web Based UNL Ontology Visualization. Journal of Convergence Information Technology, 8(13), 69.
[32] Salam, K. M. A., Setsuo, Y., & Tetsuro, N. (2012, December). Sublexical Translations for Low-Resource Language. In Proceedings of the Workshop on Machine Translation and Parsing in Indian Languages (pp. 39).
[33] Salam, K., Yamada, S., & Tetsuro, N. (2012). Phonetic Bengali Input Method for Computer and Mobile Devices. In Proceedings of the Second Workshop on Advances in Text Input Methods (WTIM 2), COLING (pp. 73-78).
[34] Chaudhury, S., Dasgupta, S., Munawar, A., Khan, M. A. S., & Tachibana, R. (2017, September). Text to image generative model using constrained embedding space mapping. In 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP) (pp. 1-6). IEEE.
[35] Salam, K. M. A., Setsuo, Y., & Nishino, T. Using WordNet to Handle the OOV Problem in English to Bangla Machine Translation. In GWC 2012: 6th International Global Wordnet Conference (p. 35).
[36] Salam, K. M. A., Yamada, S., & Tetsuro, N. (2017, July). Improve Example-Based Machine Translation Quality for Low-Resource Language Using Ontology. In International Conference on Applied Computing and Information Technology (pp. 67-90). Springer, Cham.
[37] Salam, K. M. A. (2014). Ontology Based Machine Translation for Bengali as Low-resource Language (Doctoral dissertation, University of Electro-Communications).
[38] Salam, K. M. A., Uchida, H., Yamada, S., & Nishino, T. (2013, June). Universal Words relationship question-answering from UNL Ontology. In 2013 IEEE/ACIS 12th International Conference on Computer and Information Science (ICIS) (pp. 423-427). IEEE.
[39] Salam, K. M. A., Tetsuro, N., & Yamada, S. (2012, December).
Bangla Phonetic Input Method with Foreign Words Handling. In Proceedings of the Second Workshop on Advances in Text Input Methods (pp. 73).
/Formata-Italic /Formata-Medium /Formata-MediumItalic /Formata-Regular /ForteMT /FranklinGothic-Book /FranklinGothic-BookItalic /FranklinGothic-Demi /FranklinGothic-DemiCond /FranklinGothic-DemiItalic /FranklinGothic-Heavy /FranklinGothic-HeavyItalic /FranklinGothicITCbyBT-Book /FranklinGothicITCbyBT-BookItal /FranklinGothicITCbyBT-Demi /FranklinGothicITCbyBT-DemiItal /FranklinGothic-Medium /FranklinGothic-MediumCond /FranklinGothic-MediumItalic /FrankRuehl /FreesiaUPC /FreesiaUPCBold /FreesiaUPCBoldItalic /FreesiaUPCItalic /FreestyleScript-Regular /FrenchScriptMT /Frutiger-Black /Frutiger-BlackCn /Frutiger-BlackItalic /Frutiger-Bold /Frutiger-BoldCn /Frutiger-BoldItalic /Frutiger-Cn /Frutiger-ExtraBlackCn /Frutiger-Italic /Frutiger-Light /Frutiger-LightCn /Frutiger-LightItalic /Frutiger-Roman /Frutiger-UltraBlack /Futura-Bold /Futura-BoldOblique /Futura-Book /Futura-BookOblique /FuturaBT-Bold /FuturaBT-BoldItalic /FuturaBT-Book /FuturaBT-BookItalic</s>
<s>/FuturaBT-Medium /FuturaBT-MediumItalic /Futura-Light /Futura-LightOblique /GalliardITCbyBT-Bold /GalliardITCbyBT-BoldItalic /GalliardITCbyBT-Italic /GalliardITCbyBT-Roman /Garamond /Garamond-Bold /Garamond-BoldCondensed /Garamond-BoldCondensedItalic /Garamond-BoldItalic /Garamond-BookCondensed /Garamond-BookCondensedItalic /Garamond-Italic /Garamond-LightCondensed /Garamond-LightCondensedItalic /Gautami /GeometricSlab703BT-Light /GeometricSlab703BT-LightItalic /Georgia /Georgia-Bold /Georgia-BoldItalic /Georgia-Italic /GeorgiaRef /Giddyup /Giddyup-Thangs /Gigi-Regular /GillSans /GillSans-Bold /GillSans-BoldItalic /GillSans-Condensed /GillSans-CondensedBold /GillSans-Italic /GillSans-Light /GillSans-LightItalic /GillSansMT /GillSansMT-Bold /GillSansMT-BoldItalic /GillSansMT-Condensed /GillSansMT-ExtraCondensedBold /GillSansMT-Italic /GillSans-UltraBold /GillSans-UltraBoldCondensed /GloucesterMT-ExtraCondensed /Gothic-Thirteen /GoudyOldStyleBT-Bold /GoudyOldStyleBT-BoldItalic /GoudyOldStyleBT-Italic /GoudyOldStyleBT-Roman /GoudyOldStyleT-Bold /GoudyOldStyleT-Italic /GoudyOldStyleT-Regular /GoudyStout /GoudyTextMT-LombardicCapitals /GSIDefaultSymbols /Gulim /GulimChe /Gungsuh /GungsuhChe /Haettenschweiler /HarlowSolid /Harrington /Helvetica /Helvetica-Black /Helvetica-BlackOblique /Helvetica-Bold /Helvetica-BoldOblique /Helvetica-Condensed /Helvetica-Condensed-Black /Helvetica-Condensed-BlackObl /Helvetica-Condensed-Bold /Helvetica-Condensed-BoldObl /Helvetica-Condensed-Light /Helvetica-Condensed-LightObl /Helvetica-Condensed-Oblique /Helvetica-Fraction /Helvetica-Narrow /Helvetica-Narrow-Bold /Helvetica-Narrow-BoldOblique /Helvetica-Narrow-Oblique /Helvetica-Oblique /HighTowerText-Italic /HighTowerText-Reg /Humanist521BT-BoldCondensed /Humanist521BT-Light /Humanist521BT-LightItalic /Humanist521BT-RomanCondensed /Imago-ExtraBold /Impact /ImprintMT-Shadow /InformalRoman-Regular /IrisUPC /IrisUPCBold /IrisUPCBoldItalic /IrisUPCItalic /Ironwood /ItcEras-Medium 
/ItcKabel-Bold /ItcKabel-Book /ItcKabel-Demi /ItcKabel-Medium /ItcKabel-Ultra /JasmineUPC /JasmineUPC-Bold /JasmineUPC-BoldItalic /JasmineUPC-Italic /JoannaMT /JoannaMT-Italic /Jokerman-Regular /JuiceITC-Regular /Kartika /Kaufmann /KaufmannBT-Bold /KaufmannBT-Regular /KidTYPEPaint /KinoMT /KodchiangUPC /KodchiangUPC-Bold /KodchiangUPC-BoldItalic /KodchiangUPC-Italic /KorinnaITCbyBT-Regular /KristenITC-Regular /KrutiDev040Bold /KrutiDev040BoldItalic /KrutiDev040Condensed /KrutiDev040Italic /KrutiDev040Thin /KrutiDev040Wide /KrutiDev060 /KrutiDev060Bold /KrutiDev060BoldItalic /KrutiDev060Condensed /KrutiDev060Italic /KrutiDev060Thin /KrutiDev060Wide /KrutiDev070 /KrutiDev070Condensed /KrutiDev070Italic /KrutiDev070Thin /KrutiDev070Wide /KrutiDev080 /KrutiDev080Condensed /KrutiDev080Italic /KrutiDev080Wide /KrutiDev090 /KrutiDev090Bold /KrutiDev090BoldItalic /KrutiDev090Condensed /KrutiDev090Italic /KrutiDev090Thin /KrutiDev090Wide /KrutiDev100 /KrutiDev100Bold /KrutiDev100BoldItalic /KrutiDev100Condensed /KrutiDev100Italic /KrutiDev100Thin /KrutiDev100Wide /KrutiDev120 /KrutiDev120Condensed /KrutiDev120Thin /KrutiDev120Wide /KrutiDev130 /KrutiDev130Condensed /KrutiDev130Thin /KrutiDev130Wide /KunstlerScript /Latha /LatinWide /LetterGothic /LetterGothic-Bold /LetterGothic-BoldOblique /LetterGothic-BoldSlanted /LetterGothicMT /LetterGothicMT-Bold /LetterGothicMT-BoldOblique /LetterGothicMT-Oblique /LetterGothic-Slanted /LevenimMT /LevenimMTBold /LilyUPC /LilyUPCBold /LilyUPCBoldItalic /LilyUPCItalic /Lithos-Black /Lithos-Regular /LotusWPBox-Roman /LotusWPIcon-Roman /LotusWPIntA-Roman /LotusWPIntB-Roman /LotusWPType-Roman /LucidaBright /LucidaBright-Demi /LucidaBright-DemiItalic /LucidaBright-Italic /LucidaCalligraphy-Italic /LucidaConsole /LucidaFax /LucidaFax-Demi /LucidaFax-DemiItalic /LucidaFax-Italic /LucidaHandwriting-Italic /LucidaSans /LucidaSans-Demi /LucidaSans-DemiItalic /LucidaSans-Italic /LucidaSans-Typewriter /LucidaSans-TypewriterBold 
/LucidaSans-TypewriterBoldOblique /LucidaSans-TypewriterOblique /LucidaSansUnicode /Lydian /Magneto-Bold /MaiandraGD-Regular /Mangal-Regular /Map-Symbols /MathA /MathB /MathC /Mathematica1 /Mathematica1-Bold /Mathematica1Mono /Mathematica1Mono-Bold /Mathematica2 /Mathematica2-Bold /Mathematica2Mono /Mathematica2Mono-Bold /Mathematica3 /Mathematica3-Bold /Mathematica3Mono /Mathematica3Mono-Bold /Mathematica4 /Mathematica4-Bold /Mathematica4Mono /Mathematica4Mono-Bold /Mathematica5 /Mathematica5-Bold /Mathematica5Mono /Mathematica5Mono-Bold /Mathematica6 /Mathematica6Bold /Mathematica6Mono /Mathematica6MonoBold /Mathematica7 /Mathematica7Bold /Mathematica7Mono /Mathematica7MonoBold /MatisseITC-Regular /MaturaMTScriptCapitals /Mesquite /Mezz-Black /Mezz-Regular /MICR /MicrosoftSansSerif /MingLiU /Minion-BoldCondensed /Minion-BoldCondensedItalic /Minion-Condensed /Minion-CondensedItalic /Minion-Ornaments /MinionPro-Bold /MinionPro-BoldIt /MinionPro-It /MinionPro-Regular /Miriam /MiriamFixed /MiriamTransparent /Mistral /Modern-Regular /MonotypeCorsiva /MonotypeSorts /MSAM10 /MSAM5 /MSAM6 /MSAM7 /MSAM8 /MSAM9 /MSBM10 /MSBM5 /MSBM6 /MSBM7 /MSBM8 /MSBM9 /MS-Gothic /MSHei /MSLineDrawPSMT /MS-Mincho /MSOutlook /MS-PGothic /MS-PMincho /MSReference1 /MSReference2 /MSReferenceSansSerif /MSReferenceSansSerif-Bold /MSReferenceSansSerif-BoldItalic /MSReferenceSansSerif-Italic /MSReferenceSerif /MSReferenceSerif-Bold /MSReferenceSerif-BoldItalic /MSReferenceSerif-Italic /MSReferenceSpecialty /MSSong /MS-UIGothic /MT-Extra /MTExtraTiger /MT-Symbol /MT-Symbol-Italic /MVBoli /Myriad-Bold /Myriad-BoldItalic /Myriad-Italic /Myriad-Roman /Narkisim /NewCenturySchlbk-Bold /NewCenturySchlbk-BoldItalic /NewCenturySchlbk-Italic /NewCenturySchlbk-Roman /NewMilleniumSchlbk-BoldItalicSH /NewsGothic /NewsGothic-Bold /NewsGothicBT-Bold /NewsGothicBT-BoldItalic /NewsGothicBT-Italic /NewsGothicBT-Roman /NewsGothic-Condensed /NewsGothic-Italic /NewsGothicMT /NewsGothicMT-Bold /NewsGothicMT-Italic 
/NiagaraEngraved-Reg /NiagaraSolid-Reg /NimbusMonL-Bold /NimbusMonL-BoldObli /NimbusMonL-Regu /NimbusMonL-ReguObli /NimbusRomNo9L-Medi /NimbusRomNo9L-MediItal /NimbusRomNo9L-Regu /NimbusRomNo9L-ReguItal /NimbusSanL-Bold /NimbusSanL-BoldCond /NimbusSanL-BoldCondItal /NimbusSanL-BoldItal /NimbusSanL-Regu /NimbusSanL-ReguCond /NimbusSanL-ReguCondItal /NimbusSanL-ReguItal /Nimrod /Nimrod-Bold /Nimrod-BoldItalic /Nimrod-Italic /NSimSun /Nueva-BoldExtended /Nueva-BoldExtendedItalic /Nueva-Italic /Nueva-Roman /NuptialScript /OCRA /OCRA-Alternate /OCRAExtended /OCRB /OCRB-Alternate /OfficinaSans-Bold /OfficinaSans-BoldItalic /OfficinaSans-Book /OfficinaSans-BookItalic /OfficinaSerif-Bold /OfficinaSerif-BoldItalic /OfficinaSerif-Book /OfficinaSerif-BookItalic /OldEnglishTextMT /Onyx /OnyxBT-Regular /OzHandicraftBT-Roman /PalaceScriptMT /Palatino-Bold /Palatino-BoldItalic /Palatino-Italic /PalatinoLinotype-Bold /PalatinoLinotype-BoldItalic /PalatinoLinotype-Italic /PalatinoLinotype-Roman /Palatino-Roman /PapyrusPlain /Papyrus-Regular /Parchment-Regular /Parisian /ParkAvenue /Penumbra-SemiboldFlare /Penumbra-SemiboldSans /Penumbra-SemiboldSerif /PepitaMT /Perpetua /Perpetua-Bold /Perpetua-BoldItalic /Perpetua-Italic /PerpetuaTitlingMT-Bold /PerpetuaTitlingMT-Light /PhotinaCasualBlack /Playbill /PMingLiU /Poetica-SuppOrnaments /PoorRichard-Regular /PopplLaudatio-Italic /PopplLaudatio-Medium /PopplLaudatio-MediumItalic /PopplLaudatio-Regular /PrestigeElite /Pristina-Regular /PTBarnumBT-Regular /Raavi /RageItalic /Ravie /RefSpecialty /Ribbon131BT-Bold /Rockwell /Rockwell-Bold /Rockwell-BoldItalic /Rockwell-Condensed /Rockwell-CondensedBold /Rockwell-ExtraBold /Rockwell-Italic /Rockwell-Light /Rockwell-LightItalic /Rod /RodTransparent /RunicMT-Condensed /Sanvito-Light /Sanvito-Roman /ScriptC /ScriptMTBold /SegoeUI /SegoeUI-Bold /SegoeUI-BoldItalic /SegoeUI-Italic /Serpentine-BoldOblique /ShelleyVolanteBT-Regular /ShowcardGothic-Reg /Shruti /SILDoulosIPA /SimHei /SimSun 
/SimSun-PUA /SnapITC-Regular /StandardSymL /Stencil /StoneSans /StoneSans-Bold /StoneSans-BoldItalic /StoneSans-Italic /StoneSans-Semibold /StoneSans-SemiboldItalic /Stop /Swiss721BT-BlackExtended /Sylfaen /Symbol /SymbolMT /SymbolTiger /SymbolTigerExpert /Tahoma /Tahoma-Bold /Tci1 /Tci1Bold /Tci1BoldItalic /Tci1Italic /Tci2 /Tci2Bold /Tci2BoldItalic /Tci2Italic /Tci3 /Tci3Bold /Tci3BoldItalic /Tci3Italic /Tci4 /Tci4Bold /Tci4BoldItalic /Tci4Italic /TechnicalItalic /TechnicalPlain /Tekton /Tekton-Bold /TektonMM /Tempo-HeavyCondensed /Tempo-HeavyCondensedItalic /TempusSansITC /Tiger /TigerExpert /Times-Bold /Times-BoldItalic /Times-BoldItalicOsF /Times-BoldSC /Times-ExtraBold /Times-Italic /Times-ItalicOsF /TimesNewRomanMT-ExtraBold /TimesNewRomanPS-BoldItalicMT /TimesNewRomanPS-BoldMT /TimesNewRomanPS-ItalicMT</s>
<s>/TimesNewRomanPSMT /Times-Roman /Times-RomanSC /Trajan-Bold /Trebuchet-BoldItalic /TrebuchetMS /TrebuchetMS-Bold /TrebuchetMS-Italic /Tunga-Regular /TwCenMT-Bold /TwCenMT-BoldItalic /TwCenMT-Condensed /TwCenMT-CondensedBold /TwCenMT-CondensedExtraBold /TwCenMT-CondensedMedium /TwCenMT-Italic /TwCenMT-Regular /Univers-Bold /Univers-BoldItalic /UniversCondensed-Bold /UniversCondensed-BoldItalic /UniversCondensed-Medium /UniversCondensed-MediumItalic /Univers-Medium /Univers-MediumItalic /URWBookmanL-DemiBold /URWBookmanL-DemiBoldItal /URWBookmanL-Ligh /URWBookmanL-LighItal /URWChanceryL-MediItal /URWGothicL-Book /URWGothicL-BookObli /URWGothicL-Demi /URWGothicL-DemiObli /URWPalladioL-Bold /URWPalladioL-BoldItal /URWPalladioL-Ital /URWPalladioL-Roma /USPSBarCode /VAGRounded-Black /VAGRounded-Bold /VAGRounded-Light /VAGRounded-Thin /Verdana /Verdana-Bold /Verdana-BoldItalic /Verdana-Italic /VerdanaRef /VinerHandITC /Viva-BoldExtraExtended /Vivaldii /Viva-LightCondensed /Viva-Regular /VladimirScript /Vrinda /Webdings /Westminster /Willow /Wingdings2 /Wingdings3 /Wingdings-Regular /WNCYB10 /WNCYI10 /WNCYR10 /WNCYSC10 /WNCYSS10 /WoodtypeOrnaments-One /WoodtypeOrnaments-Two /WP-ArabicScriptSihafa /WP-ArabicSihafa /WP-BoxDrawing /WP-CyrillicA /WP-CyrillicB /WP-GreekCentury /WP-GreekCourier /WP-GreekHelve /WP-HebrewDavid /WP-IconicSymbolsA /WP-IconicSymbolsB /WP-Japanese /WP-MathA /WP-MathB /WP-MathExtendedA /WP-MathExtendedB /WP-MultinationalAHelve /WP-MultinationalARoman /WP-MultinationalBCourier /WP-MultinationalBHelve /WP-MultinationalBRoman /WP-MultinationalCourier /WP-Phonetic /WPTypographicSymbols /XYATIP10 /XYBSQL10 /XYBTIP10 /XYCIRC10 /XYCMAT10 /XYCMBT10 /XYDASH10 /XYEUAT10 /XYEUBT10 /ZapfChancery-MediumItalic /ZapfDingbats /ZapfHumanist601BT-Bold /ZapfHumanist601BT-BoldItalic /ZapfHumanist601BT-Demi /ZapfHumanist601BT-DemiItalic /ZapfHumanist601BT-Italic /ZapfHumanist601BT-Roman /ZWAdobeF /NeverEmbed [ true /AntiAliasColorImages false /CropColorImages true 
/ColorImageMinResolution 150 /ColorImageMinResolutionPolicy /OK /DownsampleColorImages true /ColorImageDownsampleType /Bicubic /ColorImageResolution 300 /ColorImageDepth -1 /ColorImageMinDownsampleDepth 1 /ColorImageDownsampleThreshold 2.00333 /EncodeColorImages true /ColorImageFilter /DCTEncode /AutoFilterColorImages true /ColorImageAutoFilterStrategy /JPEG /ColorACSImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /ColorImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /JPEG2000ColorACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /JPEG2000ColorImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /AntiAliasGrayImages false /CropGrayImages true /GrayImageMinResolution 150 /GrayImageMinResolutionPolicy /OK /DownsampleGrayImages true /GrayImageDownsampleType /Bicubic /GrayImageResolution 300 /GrayImageDepth -1 /GrayImageMinDownsampleDepth 2 /GrayImageDownsampleThreshold 2.00333 /EncodeGrayImages true /GrayImageFilter /DCTEncode /AutoFilterGrayImages true /GrayImageAutoFilterStrategy /JPEG /GrayACSImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /GrayImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /JPEG2000GrayACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /JPEG2000GrayImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /AntiAliasMonoImages false /CropMonoImages true /MonoImageMinResolution 1200 /MonoImageMinResolutionPolicy /OK /DownsampleMonoImages true /MonoImageDownsampleType /Bicubic /MonoImageResolution 600 /MonoImageDepth -1 /MonoImageDownsampleThreshold 1.00167 /EncodeMonoImages true /MonoImageFilter /CCITTFaxEncode /MonoImageDict << /K -1 /AllowPSXObjects false /CheckCompliance [ /None /PDFX1aCheck false /PDFX3Check false /PDFXCompliantPDFOnly false /PDFXNoTrimBoxError true /PDFXTrimBoxToMediaBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXSetBleedBoxToMediaBox true /PDFXBleedBoxToTrimBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXOutputIntentProfile (None) 
/PDFXOutputConditionIdentifier () /PDFXOutputCondition () /PDFXRegistryName () /PDFXTrapped /False /CreateJDFFile false /Description << /ARA <FEFF06270633062A062E062F0645002006470630064700200627064406250639062F0627062F0627062A002006440625064606340627062100200648062B062706260642002000410064006F00620065002000500044004600200645062A064806270641064206290020064506390020064506420627064A064A0633002006390631063600200648063706280627063906290020062706440648062B0627062606420020062706440645062A062F062706480644062900200641064A00200645062C062706440627062A002006270644062306390645062706440020062706440645062E062A064406410629061B0020064A06450643064600200641062A062D00200648062B0627062606420020005000440046002006270644064506460634062306290020062806270633062A062E062F062706450020004100630072006F0062006100740020064800410064006F006200650020005200650061006400650072002006250635062F0627063100200035002E0030002006480627064406250635062F062706310627062A0020062706440623062D062F062B002E> /CHS <FEFF4f7f75288fd94e9b8bbe5b9a521b5efa7684002000410064006f006200650020005000440046002065876863900275284e8e55464e1a65876863768467e5770b548c62535370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c676562535f00521b5efa768400200050004400460020658768633002> /CHT <FEFF4f7f752890194e9b8a2d7f6e5efa7acb7684002000410064006f006200650020005000440046002065874ef69069752865bc666e901a554652d965874ef6768467e5770b548c52175370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c4f86958b555f5df25efa7acb76840020005000440046002065874ef63002> /CZE 
<FEFF005400610074006f0020006e006100730074006100760065006e00ed00200070006f0075017e0069006a007400650020006b0020007600790074007600e101590065006e00ed00200064006f006b0075006d0065006e0074016f002000410064006f006200650020005000440046002000760068006f0064006e00fd00630068002000700072006f002000730070006f006c00650068006c0069007600e90020007a006f006200720061007a006f007600e1006e00ed002000610020007400690073006b0020006f006200630068006f0064006e00ed0063006800200064006f006b0075006d0065006e0074016f002e002000200056007900740076006f01590065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f007400650076015900ed007400200076002000700072006f006700720061006d0065006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076011b006a016100ed00630068002e> /DAN <FEFF004200720075006700200069006e0064007300740069006c006c0069006e006700650072006e0065002000740069006c0020006100740020006f007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000650067006e006500720020007300690067002000740069006c00200064006500740061006c006a006500720065007400200073006b00e60072006d007600690073006e0069006e00670020006f00670020007500640073006b007200690076006e0069006e006700200061006600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020004400650020006f007000720065007400740065006400650020005000440046002d0064006f006b0075006d0065006e0074006500720020006b0061006e002000e50062006e00650073002000690020004100630072006f00620061007400200065006c006c006500720020004100630072006f006200610074002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e> /DEU 
<FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200075006d002000650069006e00650020007a0075007600650072006c00e40073007300690067006500200041006e007a006500690067006500200075006e00640020004100750073006700610062006500200076006f006e00200047006500730063006800e40066007400730064006f006b0075006d0065006e00740065006e0020007a0075002000650072007a00690065006c0065006e002e00200044006900650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000520065006100640065007200200035002e003000200075006e00640020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP <FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f0073002000640065002000410064006f00620065002000500044004600200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e0020006500200069006d0070007200650073006900f3006e00200064006500200063006f006e006600690061006e007a006100200064006500200064006f00630075006d0065006e0074006f007300200063006f006d00650072006300690061006c00650073002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /FRA 
<FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE <FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003b103be03b903cc03c003b903c303c403b7002003c003c103bf03b203bf03bb03ae002003ba03b103b9002003b503ba03c403cd03c003c903c303b7002003b503c003b903c703b503b903c103b703bc03b103c403b903ba03ce03bd002003b503b303b303c103ac03c603c903bd002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b
903c2002e> /HEB <FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005E205D105D505E8002005D405E605D205D4002005D505D405D305E405E105D4002005D005DE05D905E005D4002005E905DC002005DE05E105DE05DB05D905DD002005E205E105E705D905D905DD002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E05D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV (Za stvaranje Adobe PDF dokumenata pogodnih za pouzdani prikaz i ispis poslovnih dokumenata koristite ove postavke. Stvoreni PDF dokumenti mogu se otvoriti Acrobat i Adobe Reader 5.0 i kasnijim verzijama.) 
/HUN <FEFF00410020006800690076006100740061006c006f007300200064006f006b0075006d0065006e00740075006d006f006b0020006d00650067006200ed007a00680061007400f30020006d0065006700740065006b0069006e007400e9007300e900720065002000e900730020006e0079006f006d00740061007400e1007300e10072006100200073007a00e1006e0074002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c00200068006f007a006800610074006a00610020006c00e9007400720065002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA (Utilizzare queste impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) 
/JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
</s>
<s>Word Sense Disambiguation in Bangla Language Using Supervised Methodology with Necessary Modifications

ORIGINAL CONTRIBUTION

Alok Ranjan Pal · Diganta Saha · Niladri Sekhar Dash · Antara Pal

Received: 10 July 2017 / Accepted: 11 May 2018
© The Institution of Engineers (India) 2018
J. Inst. Eng. India Ser. B, https://doi.org/10.1007/s40031-018-0337-5

A. R. Pal, Department of Computer Science and Engineering, College of Engineering and Management, Kolaghat, India (chhaandasik@gmail.com) · D. Saha, Department of Computer Science and Engineering, Jadavpur University, Kolkata, India (neruda0101@yahoo.com) · N. S. Dash, Linguistic Research Unit, Indian Statistical Institute, Kolkata, India (nisedash@gmail.com) · A. Pal, Department of Computer Science and Engineering, NIT, Durgapur, India (antarapal22@gmail.com)

Abstract: An attempt is made in this paper to report how a supervised methodology has been adopted for the task of word sense disambiguation in Bangla, with necessary modifications. At the initial stage, the Naïve Bayes probabilistic model, adopted as a baseline method for sense classification, yields a moderate result of 81% accuracy when applied to a database of the 19 (nineteen) most frequently used Bangla ambiguous words. On an experimental basis, the baseline method is modified with two extensions: (a) inclusion of a lemmatization process into the system, and (b) bootstrapping of the operational process. As a result, the accuracy of the method improves slightly, up to 84%, which is a positive signal for the whole process of disambiguation, as it opens scope for further modification of the existing method for better results. The data sets used for this experiment include the Bangla POS-tagged corpus obtained from the Indian Languages Corpora Initiative, and the Bangla WordNet, an online sense inventory developed at the Indian Statistical Institute, Kolkata. The paper also reports the challenges and pitfalls of the work that have been closely observed and addressed to achieve the expected level of accuracy.

Keywords: Natural language processing · Word sense disambiguation · Naïve Bayes method · Lemmatization · Bootstrapping

Introduction

In every natural language there are many words which carry different senses in different contexts of their use. These words are often recognized as ambiguous words, and finding the exact contextual sense of an ambiguous word in a piece of text is known as Word Sense Disambiguation (WSD) [1–5]. For example, the English words head, run, round, manage, etc. have multiple senses based on their contexts of use in texts. Finding the exact senses of the words in a given context is the main challenge of WSD. Till date we have come across three major methodologies used to deal with this problem, namely the Supervised Method, the Knowledge-based Method and the Unsupervised Method.

In the Supervised Method [4, 6–24], sense disambiguation of words is performed with the help of previously created learning data sets. These learning sets contain related sentences for a particular sense of an ambiguous word. The supervised method classifies new test sentences based on the probability distributions calculated using these learning sets.

The Knowledge-based Method [25–35] depends on external knowledge-based resources like online semantic dictionaries, thesauri, machine-readable dictionaries, etc. to obtain sense definitions of the lexical components.

In the Unsupervised Method [34, 36, 37], sense disambiguation happens in two phases. First, sentences are clustered using a clustering algorithm and these clusters are tagged with relevant senses with the help of a linguistic expert. Next, a distance-based similarity measuring technique is used to</s>
<s>find the closeness of a test data with the sense-tagged clusters. The minimum distance from a sense-tagged cluster assigns that sense to the new test data.

The present work is based on the Naïve Bayes probabilistic model, which is used as a baseline method for sense classification. This baseline method generates an 81% accurate result when the algorithm is tested on 900 instances of 19 ambiguous words. Next, two extensions are adopted to increase the level of accuracy: (a) incorporation of lemmatization in the system, which generates 84% accuracy, and (b) operation of bootstrapping on the system, which produces 83% accuracy.

The organization of the paper is as follows: Sect. 2 presents a brief survey of this research area; the proposed approach is demonstrated in Sect. 3; results and discussion are presented in Sect. 4; in Sect. 5, the extensions on the baseline methodology are described in detail. The report is concluded with future scope in Sect. 6.

Survey

In the case of the Supervised Method, manually created learning sets are used to train the model. The learning sets consist of example sentences relating to a particular sense of a word. The test instances are classified based on their probability distribution calculated using the learning sets. Some commonly used approaches deployed in this method are discussed below.

Decision List

In the Decision List [35, 36] based approach, first a set of rules is formed for a target word. Next, a few example sentences are fed to the system to calculate decision parameters like feature-value, sense-score, etc. When a test data comes for the classification task, these feature values categorize that data to a particular class using these parameters.

Decision Tree

The Decision Tree [38–40] based approach frames the rules in the form of a tree structure, where the non-leaf nodes denote the tests and the branches represent the test results.
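The decision-list approach described above can be made concrete with a small sketch: each (feature, sense) pair becomes a rule scored by a smoothed log-likelihood ratio, and the highest-scoring rule that matches a test context wins. The training pairs and feature names below are invented for illustration; this is not the paper's implementation.

```python
import math
from collections import Counter

def train_decision_list(examples):
    """examples: list of (context_features, sense) pairs.
    Score each (feature, sense) rule by a smoothed log ratio of how
    often the feature occurs with that sense vs. with other senses."""
    pair, feat = Counter(), Counter()
    for feats, sense in examples:
        for f in feats:
            pair[(f, sense)] += 1
            feat[f] += 1
    rules = []
    for (f, s), n in pair.items():
        other = feat[f] - n                      # feature seen with other senses
        rules.append((math.log((n + 1) / (other + 1)), f, s))
    return sorted(rules, reverse=True)           # strongest rules first

def classify_dl(rules, feats, default):
    for _score, f, s in rules:
        if f in feats:                           # first matching rule decides
            return s
    return default
```

A call walks the ordered list from the strongest rule down, mirroring the feature-value/sense-score idea in the text, and falls back to a default sense when no rule fires.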
The leaf nodes of the tree carry the different senses. If a set of rules can guide an execution to a leaf node, then the sense at that node is assigned to the word as the derived sense.

Naïve Bayes

The Naïve Bayes [41–43] probabilistic model classifies the instances based on a few parameters. These parameters calculate the probability distribution of a particular instance w.r.t. the different classifiers. The classifier for which the probability value is the maximum for a test instance categorizes the instance accordingly. The formula for the Naïve Bayes classification is as follows:

Ŝ = argmax_{Si ∈ Senses(w)} P(Si | f1, …, fm) = argmax_{Si ∈ Senses(w)} P(f1, …, fm | Si) P(Si) / P(f1, …, fm)

where Si represents the different senses of the ambiguous word (w), the parameter fj represents the features of the word (w) in the context, and m is the number of features.

Neural Network

In the Neural Network based approach [44–47], artificial neurons act as the data processing units. The artificial neurons categorize the features into a number of non-overlapping sets. While designing a network using artificial neurons, they are arranged in different layers and the data is passed through these layers to reach the destination layer. In such a network, words are treated as nodes and relations among the words are considered as links. When data proceeds through the network, only those links get activated where the two words at the end points of an edge are semantically related.

Exemplar-Based Method

In the Exemplar-Based [48] method, examples are considered as points distributed over a</s>
<s>feature space. When a new data point comes to be categorized, any distance-based similarity measuring technique is used to find the closeness of the data point w.r.t. all the other classifiers. The minimum distance w.r.t. a particular classifier represents the sense of the test data.

Support Vector Machine

In the Support Vector Machine based [49–51] method, examples are treated as polarized points, either positive or negative. The goal of the methodology is to separate these positive and negative points w.r.t. a hyper-plane. A test data is classified by evaluating which side of the hyper-plane the point belongs to.

Ensemble Methods

In the Ensemble Method based [52] approach, classifiers are combined after every execution for a better classification result. This combination occurs according to different parameters, such as Majority Voting, Probability Mixture, Rank-Based Combination, AdaBoost [53, 54], etc.

Proposed Approach

The proposed approach adopts the Naïve Bayes (NB) probabilistic model as a baseline strategy. This model classifies the instances based on a few predefined parameters.

Module 1: Training Module

Development of the training model depends on the following parameters:

a. |V|, which represents the size of the vocabulary,
b. P(ci), the prior probability of each class,
c. ni, the total number of word tokens in each class,
d. P(wi|ci), the conditional probability of a keyword in a given class.

The "zero frequency" problem is resolved using Laplace estimation in the following way:

P(wi | ci) = (number of occurrences of the word in the given class + 1) / (ni + |V|).

Module 2: Testing Module

A test data is classified with the help of the "posterior" probability P(ci|W) w.r.t. each class, using the following formula:

P(ci | W) = P(ci) × ∏_{j=1}^{|V|} P(Wj | ci)

The highest probability measure assigns a test data to a particular class.

Flow Chart of the Baseline Method

The baseline method can be represented through the following diagram (Fig. 1).

Results and Discussion

The following steps have been executed to run the system on the database:

Text Normalization

The texts stored in the TDIL Bangla corpus are non-normalized in nature. So, the very first task was to normalize the texts adequately by (a) removing uneven numbers of spaces, new lines, etc., (b) discarding commas, colons, semicolons, double quotes, single quotes and all other orthographic symbols, (c) converting the whole text into a Unicode-compatible single Bangla font (Vrinda in this work), and (d) considering all types of Bangla sentence termination symbols, namely note-of-exclamation, note-of-interrogation and purnacched (full stop) ("।").

Removal of Non-functional Words

In NLP work there is no specific rule or process for differentiating between functional and non-functional words. Rather, it is more or less based on the nature of application of an NLP work. Although, in a practical sense, all Bangla words are useful in some context or the other, while preparing the data sets for the present work a few Bangla words have been ignored to keep the number of words within a manageable length. After lemmatization, words except nouns, pronouns, adjectives, verbs and adverbs (in Bangla, adverbs are also treated as a kind of adjective) are considered functional words.

Selection of Ambiguous Word

Theoretically, it is possible to assume that any Bangla word can appear in a text with a certain level of ambiguity. People in computational linguistics like to use a few constraints, from an implementation perspective, to select the ambiguous words. The Bangla</s>
<s>text corpus used in this work consists of 35,89,220 inflected and non-inflected words, among which 199,245 words may be treated as distinct lexical units. These words are first arranged in decreasing order according to their term frequency in the corpus. The most frequently used words are then selected for the experiment, with some necessary pre-requisite conditions as discussed later.

Fig. 1 Flow chart of the proposed baseline approach: the TDIL Bengali corpus → sentences carrying a target word retrieved programmatically → set of non-normalized sentences → manual text normalization → Module 1: Training Module → Module 2: Testing Module → output: disambiguated sense.

Annotation of an Input Data

The sentences in the test data set are annotated in the following way: a ⟨Sentence x⟩ tag at the beginning of each sentence represents the sentence number in the paragraph, and a ⟨wsd_id=y, pos=z⟩ tag carries the ambiguous word number and the Part-of-Speech of the target word in that particular sentence (Fig. 2).

Preparation of a Reference Output Data

The reference output files have been generated with the help of a standard Bangla dictionary (Sansad Banglā Avidhān = Samsad Bangla Dictionary) (Fig. 3). The reference files are used by the system to verify the system-generated outputs using a separate program.

In the first phase of the work, the baseline method is applied on 900 sentences containing the 19 most frequently used Bangla ambiguous words.

Selection of Senses of the Ambiguous Words for Evaluation

After retrieving the ambiguous words, a set of steps has been defined and executed to select their multiple senses for the experiment. The range of sense variation of Bangla words is so vast that it appeared as a real challenge to select a few senses from them for the experiment.
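The Naïve Bayes training and testing modules described earlier, with Laplace estimation (count + 1) / (ni + |V|), can be sketched as follows. The function names and the toy senses in the usage are invented for illustration; this is a minimal sketch, not the paper's implementation.

```python
import math
from collections import Counter, defaultdict

def train_nb(labeled_sentences):
    """Module 1: labeled_sentences is a list of (tokens, sense) pairs.
    Computes the priors P(ci), the per-class token totals ni, the
    per-class word counts, and the vocabulary V."""
    vocab, n_c = set(), Counter()
    word_c, docs = defaultdict(Counter), Counter()
    for tokens, sense in labeled_sentences:
        docs[sense] += 1
        for w in tokens:
            vocab.add(w)
            n_c[sense] += 1
            word_c[sense][w] += 1
    total = sum(docs.values())
    prior = {c: docs[c] / total for c in docs}
    return prior, n_c, word_c, vocab

def classify_nb(model, tokens):
    """Module 2: pick the class maximizing the posterior, computed in
    log space with Laplace smoothing (count + 1) / (ni + |V|)."""
    prior, n_c, word_c, vocab = model
    best, best_lp = None, float("-inf")
    for c in prior:
        lp = math.log(prior[c])
        for w in tokens:
            lp += math.log((word_c[c][w] + 1) / (n_c[c] + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

Log probabilities are summed instead of multiplying raw probabilities, which avoids underflow on longer sentences while selecting the same maximum-probability class.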
For example, according to the Sansad Banglā Avidhān, the word "হাত" (hāt) can denote more than 80 (eighty) different senses in both its singular and inflected forms, whereas the online Bangla WordNet cites only 14 (fourteen) distinct senses for the word. On the contrary, the TDIL Bangla text corpus provides only 4 (four) different senses of the word with a sufficient number of sentences. Taking all these variations into consideration, the threshold value has been set to 5 for the present work.

The following algorithm selects the multiple senses of an ambiguous word for the experiment:

Algorithm: Sense-Selection
Input: Sentences from a corpus containing an ambiguous word.
Output: Multiple senses of the ambiguous word.
Step 1: Sentences are classified based on contextual words.
Step 2: Misclassified sentences are rectified by an expert.
Step 3: A sense inventory is prepared for the ambiguous word based on the Sansad Bangla Abhidhan and the Bangla WordNet.
Step 4: Specific senses from the sense inventory are tagged to the sentence classes.
Step 5: The sense-tagged classes are rearranged in decreasing order of the number of sentences they contain.
Step 6: Classes containing more sentences than a threshold value are selected.
Step 7: The senses associated with the selected classes are considered for evaluation.

The selected senses obtained by this algorithm are listed in Table 1.

Parameters for Evaluating the Performance

The performance of the algorithms has been measured using the conventional parameters: Precision, Recall, and F-Measure.

Fig. 2 Partial view of a sample input file

Precision (P) = (number of correctly evaluated instances according to human decision) / (total number of instances solved by the system);
Recall (R) = number</s>
<s>of correctly evaluated instances (according to human decision) / (total number of data instances); and
F-Measure = 2 × P × R / (P + R).

Throughout the work, the system evaluated all the test instances either correctly or wrongly, which results in the same Precision and Recall value for each data set.

Fig. 3 Partial view of a reference output data

Table 1 Selected senses of the ambiguous words:
māthā: mastak (head), chintā (thought), prānta (edge)
ghar: griha (home), sansār karā (to live family life), bansha (family)
mane: manan (mind), bodh haoyā (assuming), mane dharā (liking)
pā: pā (leg), padaksep (step), padārpan (to keep foot), eksange calā (move together), phense yāoyā (to be trapped)
tolā: uttolan karā (pick), utthāpan karā (propose), arpan karā (give), pratyāhār karā (withdraw), sristi karā (design), sangraha karā (collect)
jal: bāri (water), ashru (tear), jive jal (saliva), ghatanā prabāha (flow of event)
mānush: byakti (person), nar (homo sapiens), lālan pālan (nourish)
parā: parāshunā karā (study), patan (fall)
hāt: hasta (hand), abadān (contribution), hāt pātā (beg), hāt badal (exchange)
yog: yogfal (add), samparka (relation), yogdān karā (participate)
mukh: badan (face), mukh bibar (mouth), prānta (opening)
shabda: akshar (word), dhwanee (sound)
din: din (day), deyā (give), pratidin (everyday), din kātāno (life-living)
kāj: kārya (work), kartabya (duty), hasta-shilpa (handicraft)
nām: nām (name), sunām (fame), nāmgān (chant)
samay: ksan (moment), kāl (in time), abasar (leisure time)
dhare: dhāran karā (hold), ākraman karā (attack), yābat (during)
kāchhe: kāchākāchi (near), prati (to), bicāre (according to)
niche: neecu (down), kam (less), parabarti ansha (next section)

Baseline Result

The typical Naïve Bayes algorithm has been developed as the baseline for this work. The algorithm evaluated the 19 most frequently used Bangla ambiguous words with the same Precision and Recall value of 81% on average (Table 2).

Table 2 Execution of the baseline model (word, no. of sentences, accuracy achieved (%)):
ghar 50 84; mane 55 80; pā 50 82; tolā 30 82; jal 77 83; mānush 50 78; parā 50 86; hāt 20 80; yog 50 84; mukh 50 72; shabda 50 86; din 20 80; kāj 50 82; nām 50 86; samay 50 82; dhare 53 82; kāchhe 56 83; niche 51 74; māthā 30 83; Total 892 81.5

Extensions on the Baseline Methodology

To enhance the performance of the baseline methodology, the following two extensions have been adopted: (a) lemmatization of inflected forms across the whole system, and (b) bootstrapping.

Lemmatization of the Whole System

Since Bangla is morphologically very rich, lexical matching alone is not adequate for measuring the similarity of senses between words. To overcome this bottleneck, the whole system has been operated on the lemmatized forms of words. The expansion of lexical coverage due to lemmatization generates situations where more lexical similarities are observed between the instances, which eventually leads the system to act in a far more robust manner and achieve a higher level of accuracy. The lemmatization tool operated on the training sets, test data, and vocabulary (features) in a uniform manner without any selectional bias.
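The internals of the lemmatization tool are not described in the paper, so the sketch below only illustrates the general idea with crude longest-match suffix stripping; the transliterated suffix list is an invented, far-from-complete illustration, and real Bangla morphology needs considerably more than this.

```python
# Crude longest-match suffix stripper; the transliterated suffix list is
# an invented illustration, not the tool actually used in the paper.
SUFFIXES = sorted(["gulo", "guli", "der", "ke", "te", "er", "ra", "e"],
                  key=len, reverse=True)

def lemmatize(word, min_stem=2):
    """Strip the longest matching inflectional suffix, keeping at least
    min_stem characters of stem; return the word unchanged otherwise."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= min_stem:
            return word[:-len(suf)]
    return word
```

Sorting the suffixes longest-first makes "der" win over "er", so an inflected form loses the whole suffix rather than a fragment of it.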
However, since the tool could not produce accurate results for all the words, which is bound to happen due to the complexities involved in the surface forms of many inflected Bangla words, manual intervention has been necessary for rectifying some of the errors in the eventual output database. A glimpse of the sample lemmatized input data is presented below (Fig. 4), where the annotation of the text follows</s>
<s>the same strategy as in the baseline method, in addition to the words derived from lemmatization. Words are represented in the following format: "word-in-surface-level/stem-form/POS".

This expansion approach uses the same standard output files used in the baseline experiment. Though the inputs have been prepared in lemmatized form, the outputs have been generated in the surface-level forms of the words to allow a like-for-like comparison with the baseline approach. In Table 3 the performance of the algorithm on a regular data set and its corresponding lemmatized form is presented.

It is observed that the overall accuracy has increased due to the expansion of the lexical coverage of the words. Since the size of the data sets taken for the experiment is quite small, on several occasions the algorithm has returned the same accuracy. As mentioned earlier, the Precision and Recall values are both 84% in this phase over a baseline accuracy of 82% on the same data set.

Bootstrapping

In this extended methodology, the sense-resolved test data in a particular phase of execution is inserted into the training sets to enrich the learning procedure. As the training sets become stronger with every execution, the system can produce better accuracy in its subsequent executions. A small amount of manual intervention was mandatory in this phase as well. Since the classification of a data set depends on probability measures based on the training sets, the methodology requires a correctly populated training set for sense retrieval. Since the proposed model could not produce an absolute result in a particular execution, the misclassified instances have been further rectified by manual intervention to lead the system in the right direction (Fig. 5).

In this phase two consecutive executions have been considered. In the first phase, the module has been tested on a selected set of data from the Bangla corpus.
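The bootstrapping cycle described in this section, classify a batch, verify the resolved instances, and fold them back into the training sets, can be sketched generically as follows. The train_fn/classify_fn interface and the verify_fn stub (standing in for the manual rectification step) are assumptions for illustration, not the paper's code.

```python
def bootstrap(train_fn, classify_fn, labeled, unlabeled_batches, verify_fn):
    """Self-training loop: after each batch, the (verified) newly
    sense-resolved instances are appended to the training data, so the
    learning sets grow stronger for the next execution.
      train_fn(labeled) -> model
      classify_fn(model, tokens) -> sense
      verify_fn(tokens, sense) -> corrected sense (manual check stub)"""
    labeled = list(labeled)
    for batch in unlabeled_batches:
        model = train_fn(labeled)
        resolved = [(toks, verify_fn(toks, classify_fn(model, toks)))
                    for toks in batch]
        labeled.extend(resolved)          # training sets auto-increment
    return train_fn(labeled)              # model after the final round
```

Each outer iteration corresponds to one of the consecutive executions: the first is trained only on the seed data, the second on the auto-incremented sets.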
After the training sets are auto-incremented, a new set of data has been selected for the experiment in the second phase. The accuracy of the results in both phases is presented in Table 4. The Precision and Recall values are the same, at 83%, over a baseline accuracy of 81.5%.

It is observed that extensions on the baseline methodology can produce a better result in most of the cases (Tables 3, 4). However, in a few cases the accuracy level has slightly dropped. Through investigation it is observed that the accuracy of the system depends on a few predefined parameters, such as wide varieties in the sentence representation of any particular sense, the occurrence of the same lexical entries in semantically dissimilar sentences, and many more.

Fig. 4 A sample lemmatized input data

Table 3 Performance of the algorithm on a regular data set and its corresponding lemmatized form (word, no. of sentences, accuracy in the non-lemmatized system (%), accuracy in the lemmatized system (%)):
jal 77 83 86; samay 50 82 84; hāt 20 80 85; yog 50 84 85; mānush 50 78 78; din 20 80 85; ghar 50 84 84; māthā 30 83 86; dhare 53 82 82; shabda 50 86 86; tolā 30 82 86; fale 50 83 82; Total 892 82 84

Conclusion and Future Scope

In this paper the work on Word Sense Disambiguation in the Bangla language has been proposed using the Naïve Bayes algorithm as a baseline method, supported with two extensions, namely lemmatization and bootstrapping. The results obtained from this work, although not</s>
<s>exact to our expectation, may be accepted for the time being on the ground that this is the first attempt of its kind, and this method may help us devise new strategies for achieving our goals. In reality, the complex linguistic nature of South Asian languages like Hindi, Bangla, Tamil, Telugu, Punjabi, Malayalam and Marathi usually puts several challenges before us in the form of fonts, texts, morphological complexities, etc., due to which achieving even a slight breakthrough in the computation of these languages becomes a real challenge for many of us. At the same time, the variation of word senses, the diversity of sentence structures, and the complex formation of functional and non-functional words demand additional attention for achieving better results from such experiments.

Fig. 5 Flowchart of the proposed bootstrapping method: the Bengali text corpus → sample data set retrieved → non-normalized data set generated → data set normalized → Naïve Bayes rule applied → actual sense resolved → resolved data instances populate the learning sets for further execution.

Table 4 Result of bootstrapping method (word, no. of sentences, accuracy achieved in 1st execution (%), accuracy achieved in 2nd execution (%)):
ghar 50 84 85; mane 55 80 84; pā 50 82 78; tolā 30 82 84; jal 77 83 86; mānush 50 78 70; parā 50 86 86; hāt 20 80 81; yog 50 84 85; mukh 50 72 79; shabda 50 86 85; din 20 80 82; kāj 50 82 83; nām 50 86 86; samay 50 82 85; dhare 53 82 84; kāchhe 56 83 83; niche 51 74 80; māthā 30 83 84; Total 892 81.5 83

References

1. N. Ide, J. Véronis, Word sense disambiguation: the state of the art. Comput. Linguist. 24(1), 1–40 (1998)
2. R. Florian, S. Cucerzan, C. Schafer, D. Yarowsky, Combining classifiers for word sense disambiguation. Nat. Lang. Eng. 8(4), 327–341 (2002)
3. M.S. Nameh, M. Fakhrahmad, M.Z. Jahromi, A new approach to word sense disambiguation based on context similarity, in Proceedings of the World Congress on Engineering, vol. I (2011)
4. W. Xiaojie, Y. Matsumoto, Chinese word sense disambiguation by combining pseudo training data, in Proceedings of the International Conference on Natural Language Processing and Knowledge Engineering (2003), pp. 138–143
5. R. Navigli, Word sense disambiguation: a survey. ACM Comput. Surv. 41(2), 1–69 (2009)
6. M. Sanderson, Word sense disambiguation and information retrieval, in Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'94, July 03–06, Dublin (Springer, New York, 1994), pp. 142–151
7. E. Agirre, P. Edmonds (eds.), Word Sense Disambiguation: Algorithms and Applications, Text, Speech and Language Technology, vol. 33 (Springer, Netherlands, 2007)
8. H. Seo, H. Chung, H. Rim, S.H. Myaeng, S. Kim, Unsupervised word sense disambiguation using WordNet relatives. Comput. Speech Lang. 18(3), 253–273 (2004)
9. G.A. Miller, R. Beckwith, C. Fellbaum, D. Gross, K. Miller, WordNet: an on-line lexical database. Int. J. Lexicogr. 3, 235–244 (1990)
10. S.G. Kolte, S.G. Bhirud, Word sense disambiguation using WordNet domains, in 1st International Conference on Digital Object Identifier (2008), pp. 1187–1191
11. Y. Liu, P. Scheuermann, X. Li, X. Zhu, Using WordNet to disambiguate word senses for text classification, in Proceedings of the 7th International Conference on Computational Science (Springer, Berlin, 2007), pp. 781–789
12. G.A. Miller, R. Beckwith, C. Fellbaum, D. Gross, K.J. Miller, WordNet: an on-line lexical database. Int. J. Lexicogr. 3(4), 235–244 (1990)
13. G.A. Miller, WordNet: a lexical database. Commun. ACM 38(11), 39–41 (1993)
14. A.J. Cañas, A. Valerio, J. Lalinde-Pulido, M. Carvalho, M. Arguedas, Using WordNet for word sense disambiguation to support concept map construction, in String Processing and Information Retrieval, SPIRE 2003, ed. by M.A. Nascimento, E.S. de Moura, A.L. Oliveira, Lecture Notes in Computer Science, vol. 2857 (Springer, Berlin, Heidelberg, 2003), pp. 350–359
15. C. Marine, W.U. Dekai, Word sense disambiguation vs. statistical machine translation, in Proceedings of the 43rd Annual Meeting of the ACL (Ann Arbor, 2005), pp. 387–394
16. http://www.ling.gu.se/sl/Undervisning/StatMet11/wsd-mt.pdf. Accessed 14 May 2015
17. http://nlp.cs.nyu.edu/sk-symposium/note/P-28.pdf. Accessed 14 May 2015
18. S.C. Yee, T.N. Hwee, C. David, Word sense disambiguation improves statistical machine translation, in Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (Prague, 2007), pp. 33–40
19. R. Mihalcea, D. Moldovan, An iterative approach to word sense disambiguation, in Proceedings of FLAIRS 2000 (Orlando, FL, 2000), pp. 219–223
20. S. Christopher, P.O. Michael, T. John, Word sense disambiguation in information retrieval revisited, in SIGIR'03, July 28–Aug 1, 2003 (Toronto, Canada, 2003)
21. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.65.6828&rep=rep1&type=pdf. Accessed 14 May 2015
22. http://www.aclweb.org/anthology/P12-1029. Accessed 14 May 2015
23. https://www.comp.nus.edu.sg/nght/pubs/esair11.pdf. Accessed 14 May 2015
24. http://cui.unige.ch/isi/reports/2008/CLEF2008-LNCS.pdf. Accessed 14 May 2015
25. S. Banerjee, T. Pedersen, An adapted Lesk algorithm for word sense disambiguation using WordNet, in Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics (Mexico City, 2002)
26. M. Lesk, Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone, in Proceedings of SIGDOC (1986)
27. http://www.dlsi.ua.es/projectes/srim/publicaciones/CICling-2002.pdf. Accessed 14 May 2015
28. K. Mittal, A. Jain, Word sense disambiguation method using semantic similarity measures and OWA operator. ICTACT J. Soft Comput. 05(02), 896–904 (2015)
29. http://www.d.umn.edu/tpederse/Pubs/cicling2003-3.pdf. Accessed 14 May 2015
30. http://www.aclweb.org/anthology/U04-1021. Accessed 14 May 2015
31. http://www.aclweb.org/anthology/C10-2142. Accessed 14 May 2015
32. M.C. Diana, J. Carroll, Disambiguating nouns, verbs, and adjectives using automatically acquired selectional preferences. Comput. Linguist. 29(4), 639–654 (2003)
33. Y. Patrick, B. Timothy, Verb sense disambiguation using selectional preferences extracted with a state-of-the-art semantic role labeler, in Proceedings of the 2006 Australasian Language Technology Workshop (ALTW2006) (2006), pp. 139–148
34. http://link.springer.com/article/10.1023/A%3A1002674829964#page-1. Accessed 14 May 2015
35. S. Parameswarappa, V.N. Narayana, Kannada word sense disambiguation using decision list. Int. J. Emerg. Trends Technol. Comput. Sci. 2(3), 272–278 (2013)
36. http://www.academia.edu/5135515/Decision_List_Algorithm_for_WSD_for_Telugu_NLP. Accessed 10 Mar 2015
37. T. Pedersen, Unsupervised corpus-based methods for WSD, in Word Sense Disambiguation: Algorithms and Applications, ed. by E. Agirre, P. Edmonds, Text, Speech and Language Technology, vol. 33 (Springer, Dordrecht, 2007), pp. 133–166
38. R.L. Singh, K. Ghosh, K. Nongmeikapam, S. Bandyopadhyay, A decision tree based word sense disambiguation system in Manipuri language. ACIJ 5(4), 17–22 (2014)
39. http://wing.comp.nus.edu.sg/publications/theses/2011/low_wee_urop.pdf. Accessed 14 May 2015
40. http://www.d.umn.edu/tpederse/Pubs/naacl01.pdf. Accessed 14 May 2015
41. C. Le, A. Shimazu, High WSD accuracy using Naive Bayesian classifier with rich features, in PACLIC 18, Dec 8th–10th, 2004 (Waseda University, Tokyo, 2004), pp. 105–114
42. http://www.cs.upc.edu/escudero/wsd/00-ecai.pdf. Accessed 14 May 2015
43. N.T.T. Aung, K.M. Soe, N.L. Thein, A word sense disambiguation system using Naïve Bayesian algorithm for Myanmar language. Int. J. Sci. Eng. Res. 2(9), 1–7 (2011)
44. http://crema.di.unimi.it/pereira/his2008.pdf. Accessed 14 May 2015
45. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.9418&rep=rep1&type=pdf. Accessed 14 May 2015
46. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.154.3476&rep=rep1&type=pdf. Accessed 14 May 2015
47. http://www.aclweb.org/anthology/W02-1606. Accessed 14 May 2015
48. http://www.aclweb.org/anthology/W97-0323. Accessed 14 May 2015
49. https://www.comp.nus.edu.sg/nght/pubs/se3.pdf. Accessed 14 May 2015
50. D. Buscaldi, P. Rosso, F. Pla, E. Segarra, E.S. Arnal, Verb sense disambiguation using support vector machines: impact of WordNet-extracted features, in CICLing 2006, ed. by A. Gelbukh, LNCS 3878 (2006), pp. 192–195
51. http://www.cs.cmu.edu/maheshj/pubs/joshipedersenmaclin.iicai2005.pdf. Accessed 14 May 2015
52. S. Brody, R. Navigli, M. Lapata, Ensemble methods</s>
<s>for unsu-pervised WSD, in Proceedings of the 21st International Confer-ence on Computational Linguistics and 44th Annual Meeting ofthe ACL (Sydney, 2006), pp. 97–10453. http://arxiv.org/pdf/cs/0007010.pdf. 14 May 201554. http://www.aclweb.org/anthology/S01-1017. 14 May 2015J. Inst. Eng. India Ser. B123http://www.ling.gu.se/%7esl/Undervisning/StatMet11/wsd-mt.pdfhttp://nlp.cs.nyu.edu/sk-symposium/note/P-28.pdfhttp://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.65.6828&rep=rep1&type=pdfhttp://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.65.6828&rep=rep1&type=pdfhttp://www.aclweb.org/anthology/P12-1029https://www.comp.nus.edu.sg/%7enght/pubs/esair11.pdfhttp://cui.unige.ch/isi/reports/2008/CLEF2008-LNCS.pdfhttp://www.dlsi.ua.es/projectes/srim/publicaciones/CICling-2002.pdfhttp://www.dlsi.ua.es/projectes/srim/publicaciones/CICling-2002.pdfhttp://www.d.umn.edu/%7etpederse/Pubs/cicling2003-3.pdfhttp://www.aclweb.org/anthology/U04-1021http://www.aclweb.org/anthology/C10-2142http://link.springer.com/article/10.1023/A%253A1002674829964%23page-1http://link.springer.com/article/10.1023/A%253A1002674829964%23page-1http://www.academia.edu/5135515/Decision_List_Algorithm_for_WSD_for_Telugu_NLPhttp://www.academia.edu/5135515/Decision_List_Algorithm_for_WSD_for_Telugu_NLPhttp://wing.comp.nus.edu.sg/publications/theses/2011/low_wee_urop.pdfhttp://wing.comp.nus.edu.sg/publications/theses/2011/low_wee_urop.pdfhttp://www.d.umn.edu/%7etpederse/Pubs/naacl01.pdfhttp://www.cs.upc.edu/%7eescudero/wsd/00-ecai.pdfhttp://crema.di.unimi.it/%7epereira/his2008.pdfhttp://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.9418&rep=rep1&type=pdfhttp://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.9418&rep=rep1&type=pdfhttp://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.154.3476&rep=rep1&type=pdfhttp://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.154.3476&rep=rep1&type=pdfhttp://www.aclweb.org/anthology/W02-1606http://www.aclweb.org/anthology/W97-0323https://www.comp.nu
s.edu.sg/%7enght/pubs/se3.pdfhttp://www.cs.cmu.edu/%7emaheshj/pubs/joshi%2bpedersen%2bmaclin.iicai2005.pdfhttp://www.cs.cmu.edu/%7emaheshj/pubs/joshi%2bpedersen%2bmaclin.iicai2005.pdfhttp://arxiv.org/pdf/cs/0007010.pdfhttp://www.aclweb.org/anthology/S01-1017 Word Sense Disambiguation in Bangla Language Using Supervised Methodology with Necessary Modifications Abstract Introduction Survey Decision List Decision Tree Na&#239;ve Bayes Neural Network Exemplar-Based Method Support Vector Machine Ensemble Methods Proposed Approach Results and Discussion Text Normalization Removal of Non-functional Words Selection of Ambiguous Word Annotation of an Input Data Preparation of a Reference Output Data Selection of Senses of the Ambiguous Words for Evaluation Algorithm: Sense-Selection Parameters for Evaluating the Performance Extensions on the Baseline Methodology Lemmatization of the Whole System Bootstrapping Conclusion and Future Scope References</s>
<s>Shahjalal University of Science and Technology
Department of Computer Science and Engineering

Developing A Bangla WordNet: The Word Clustering Approach

A Thesis submitted to the Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet - 3114, Bangladesh, in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering.

Nafisa Nowshin, Reg. No.: 2013331033, 4th year, 2nd Semester
Zakia Sultana Ritu, Reg. No.: 2013331045, 4th year, 2nd Semester

Supervisor: Md Mahadi Hasan Nahid, Lecturer, Department of Computer Science and Engineering

September 8, 2018

Recommendation Letter from Thesis Supervisor

The thesis entitled "Developing A Bangla WordNet: The Word Clustering Approach", submitted by the students

1. Nafisa Nowshin, 2013331033
2. Zakia Sultana Ritu, 2013331045

is a record of research work carried out under my supervision, and I hereby approve that the report be submitted in partial fulfillment of the requirements for the award of their Bachelor Degrees.

Signature of the Supervisor:
Name of the Supervisor: Md Mahadi Hasan Nahid
Date: September 8, 2018

Certificate of Acceptance of the Thesis

The thesis entitled "Developing A Bangla WordNet: The Word Clustering Approach", submitted by the students

1. Nafisa Nowshin, 2013331033
2. Zakia Sultana Ritu, 2013331045

on September 8, 2018, is hereby accepted as the partial fulfillment of the requirements for the award of their Bachelor Degrees.

Head of the Dept. and Chairman, Exam. Committee: Dr Mohammad Reza Selim, Professor, Department of Computer Science and Engineering
Supervisor: Md Mahadi Hasan Nahid, Lecturer, Department of Computer Science and Engineering

Abstract

In this thesis report, we propose a method for constructing a Bangla WordNet. A WordNet can be described as a semantic network of words, in which all the words of a language are connected with each other through semantic relations. This database is derived from various sources; the source used by us is a Bangla corpus constructed from sources like Bangla Wikipedia pages, Bangla online newspaper articles, etc. Each WordNet groups word meanings in different ways depending on the construction method. 
The method we propose mainly focuses on the relationship of words that have the same meaning and can be used in a sentence in place of one another. A WordNet has many scopes for improving and contributing to NLP-related work such as search engines and information retrieval systems, word sense disambiguation, text mining, automatic text classification, automatic text summarization, etc.

Keywords: Natural Language Processing (NLP), machine learning, deep learning, neural network,</s>
<s>word cluster, word2vec.

Acknowledgements

We would like to thank the Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet 3114, Bangladesh, for supporting this research. We are also grateful to the numerous authors of previous works for their cooperation and support.

We would like to express our heartiest gratitude to our advisor Md Mahadi Hasan Nahid for the constant support and inspiration he provided us during our Bachelor Thesis study and research. His patience, motivation, supervision, and vast knowledge were our thorough guide till the end. We also want to mention another name, Sabir Ismail sir, for his outstanding guidance and support. He is an inspiration to us. He guided us, helped us, and mostly kept us motivated always. A very special thanks to him.

Dedication

We would like to dedicate our research to our parents. We are also grateful to the anonymous authors of previous works for their co-operation and support.

Contents

Abstract
Acknowledgement
Dedication
Table of Contents
List of Tables
List of Figures

1 Introduction
1.1 Introduction
1.2 Motivation
1.3 Report Structure</s>
<s>2 Background Study
2.1 Literature Review
2.2 WordNets In Other Languages
2.3 Uses of WordNet

3 Methodology
3.1 Data Collection
3.2 Previous approach on word embedding
3.3 Our approach
3.3.1 Vector representation of words
3.3.2 Pre-processing steps
3.3.3 The word2vec model
3.3.4 FastText Model
3.3.5 Dictionary Parsing
3.3.6 Hierarchy Building
3.3.7 Adding Details to Hierarchy Structure

4 Result Analysis
4.1 Experiment I: Word2vec in Tensorflow
4.2 Experiment II: Word2vec from Gensim package (Skip-gram model)
4.3 Experiment II: Word2vec from Gensim package (CBOW model)</s>
<s>4.4 Experiment III: FastText Skip-gram model
4.5 Experiment III: FastText CBOW model
4.6 Training Time
4.7 Comparing The Word Embedding Models
4.8 Hierarchy Building
4.9 Adding Details to Hierarchy Structure

5 Discussion
5.1 Discussion

6 Conclusion

References

Appendix
A Paper Published on Previous Work

List of Tables
2.1 Clusters formed using N-gram approach[1]
3.1 Details of the Corpus
4.1 Parameter Tuning for Optimum Results
4.2 Results from Word2vec in Tensorflow
4.3 Results from Word2vec from Gensim package (Skip-gram model)
4.4 Results from Word2vec from Gensim package (CBOW model)
4.5 Results from FastText Skip-gram model
4.6 Results from FastText CBOW model
4.7 Training Time of the Experiments

List of Figures
1.1 Sample structure of an English WordNet</s>
<s>1.2 Sample structure of a Bangla WordNet
2.1 Block diagram of WordNet system[2]
2.2 Proposed method for BanglaNet[3]
2.3 Linked Indo WordNet structure[4]
3.1 Histogram of Most Frequent Words with Number of Occurrences
3.2 Vector representation of a text document
3.3 Example of Hierarchy of Word relations
3.4 Mapping with dictionary
4.1 Training Time
4.2 Hierarchy of Word relations along with cosine similarity
4.3 Hierarchy of Word relations along with cosine similarity

Chapter 1
Introduction

1.1 Introduction

Bangla is a major world language, and it will only grow more important in the years to come with the increase in Bangla-speaking people all over the globe. Bangla is currently the 7th most spoken language[5] in the world. As the importance of the language grows, so does the research concerning it. In this era of digital development, more and more focus is being given to the digital development of languages, and natural language processing is given much importance in the field of computing and research. In the case of Bangla, many research works are being conducted on various branches of natural language processing with the goal of digitalization and preservation of the language. Many of these research works depend on the availability of digital resources for the language. These resources include well-balanced and available monolingual and parallel corpora, dictionaries, etc. Although Bangla is a widely spoken language, its resources are not as rich as they should be, so much attention is now being given to the construction and</s>
<s>development of these resources.

A WordNet can be a very powerful resource for any language. The concept of a WordNet was first introduced by Princeton University, which developed the WordNet for the English language, now known as the Princeton WordNet[6]. WordNet is a large lexical database of the English language. Nouns, verbs, adjectives, and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept and providing short definitions and usage examples. These synsets are interlinked by means of conceptual-semantic and lexical relations, resulting in a network of meaningfully related words and concepts. WordNet's structure makes it a useful tool for computational linguistics and natural language processing. An important aspect of WordNet is that it interlinks not just word forms but specific senses of words. The English WordNet includes and reflects all types of relations between words; it is constructed based on both similarity and antonymity of words. We can visualize it with the help of the figure below.

Figure 1.1: Sample structure of an English WordNet

But as the construction of a complete WordNet in a new language is a huge and challenging task, we are focusing on constructing the Bangla WordNet based only on the semantic relationships between words. We are trying to connect each word with other words that have the same meaning and can be used in a sentence in place of one another. So, our target is to construct a WordNet that is connected via synonyms of words. There are many research works focusing on the construction of WordNets for different languages all over the world. But in the case of Bangla, although some attempts have been made to develop a small prototype, no complete Bangla WordNet has been built yet. The attempts made so far include translating English words in the Princeton WordNet to Bangla and mapping them to construct the network. A Bangla WordNet will enrich the Bangla language platform. 
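The synset idea described above can be sketched as a small, dictionary-based structure: words grouped into sets of synonyms, with an index from each word form back to its senses. The Bangla words (in Latin transliteration), synset identifiers, and glosses below are invented purely for illustration and are not taken from the thesis corpus; a real WordNet would hold thousands of synsets linked by many relation types.

```python
# Toy sketch of synsets: each synset groups synonymous word forms around
# one concept, with a gloss. Words and groupings are invented examples.
synsets = [
    {"id": "jol.n.01", "gloss": "water; a clear liquid", "words": {"jol", "pani"}},
    {"id": "baRi.n.01", "gloss": "a dwelling; a house", "words": {"baRi", "ghar", "bhaban"}},
]

# Index each word form to the synsets containing it, so a lookup yields
# the word's senses and, through them, its synonyms.
index = {}
for s in synsets:
    for w in s["words"]:
        index.setdefault(w, []).append(s)

def synonyms(word):
    """All other words sharing a synset with `word`."""
    result = set()
    for s in index.get(word, []):
        result |= s["words"] - {word}
    return result

print(sorted(synonyms("baRi")))  # -> ['bhaban', 'ghar']
```

Because each word maps to specific synsets rather than to other words directly, the same word form can participate in several senses, which is exactly the property that makes a WordNet useful beyond a plain synonym list.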
The method we propose in this report features a new approach to Bangla WordNet construction: it starts with the root word, connects its variations, and, gradually building this way, connects the synonyms.

To build a Bangla WordNet, there is a lot of pre-processing to do. First, a large dataset covering a wide range of topics is needed to find semantic relations between words. We have collected a dataset for this purpose and applied dynamic word embedding models to construct word embeddings. Different dynamic word embedding models were applied and their results compared in order to choose an appropriate model for constructing the word clusters that make up the WordNet structure. Then, through dictionary parsing, details of the words such as meaning, definition, and parts of speech were assigned to the connected words present in the WordNet structure.

Figure 1.2: Sample structure of a Bangla WordNet

Comparing the figures of the English and Bangla WordNets given here, some differences can be noticed. That is because of the slight difference in the construction methods of the two WordNets. Also, our WordNet does not yet contain antonyms, and we started working with root words and built up from there, whereas the English WordNet does not connect root words but rather the synonyms of their variations. A WordNet in itself is a huge</s>
<s>and important resource for any language, but its importance does not end there. As a WordNet features word relations and their connections, a lot of information about a language can be extracted from it, which makes it a powerful tool for research. WordNet is used in various sectors of NLP research. It can help and contribute to research on word sense disambiguation, the process of determining the exact sense of a word used in a natural language context. As a WordNet stores not only the synonyms of a word but also an idea of the context in which it is used and of the other words used in that same context, it is a valuable resource for word sense disambiguation research, and a complete WordNet can take us miles forward in this sector. Another use of a WordNet is in search engine related work: search engines have to predict words and synonyms based on user input, and a WordNet can come in very handy for this. It can also help information retrieval systems by retrieving conceptual information about each word in the given query context. Building effective automatic text classification, automatic text summarization, and text mining systems can also benefit from the data stored in a WordNet.

1.2 Motivation

There is scope for a lot of research work on the Bangla language in the huge field of natural language processing. In current times, much importance has been given to this sector, and it is now developing fast. Even so, there is still no Bangla WordNet. There has been very little contribution to this field, but it has now become really necessary to build a Bangla WordNet in order to provide a strong platform for computerized Bangla. Since there is no Bangla WordNet yet, our target is to contribute to this sector as much as we can. While attempting to construct a Bangla WordNet, we can also shed light on the difficulties faced and the opportunities for improving the methods applied. 
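Returning to the word sense disambiguation use case mentioned above, a minimal gloss-overlap (Lesk-style) sketch shows how a WordNet's sense glosses can support it. The tiny sense inventory below is invented and uses English words for readability; this is a generic illustration of gloss overlap, not the method of any work cited in this report.

```python
# Invented two-sense inventory for the ambiguous word "bank"; a real
# system would query a WordNet for senses and glosses instead.
senses = {
    "bank": {
        "riverbank": "sloping land beside a river or stream",
        "finance":   "institution that accepts deposits and lends money",
    },
}

def disambiguate(word, context_words):
    """Pick the sense whose gloss shares the most words with the context."""
    best_sense, best_overlap = None, -1
    for sense, gloss in senses[word].items():
        overlap = len(set(gloss.split()) & set(context_words))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

context = "we walked along the river to the bank".split()
print(disambiguate("bank", context))  # -> riverbank
```

The gloss for the "riverbank" sense shares the word "river" with the context, while the "finance" gloss shares nothing, so the river sense wins; richer context and synonym expansion improve such overlap scores.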
We target to present a Bangla WordNet based on the semantic relationships of words.

A big part of constructing a Bangla WordNet is Bangla word embedding. Previous works in this sector have not yielded very promising results: many methods have been applied, and many approaches have failed to increase accuracy. We want to improve the efficiency of Bangla word embedding methods. Previous methods mostly used an n-gram approach for word embedding; we want to apply deep learning methods instead to increase the efficiency of the process. In attempting to construct the Bangla WordNet, we have tried different models to produce word clusterings, and we can give an overview of their performance. In our work, we are trying an algorithmic approach which previous works have not explored.

1.3 Report Structure

The rest of the chapters are structured as follows:

• Chapter 2 reviews some of the related works on the Bangla WordNet, word embedding techniques, etc. It also throws light on other approaches to WordNet construction in different languages and on the uses of a WordNet.

• Chapter 3 outlines the methodology adopted for our thesis work. It discusses in full detail our implementation, the experiments done, the</s>
<s>process followed and the steps implemented to complete our work.

• Chapter 4 deals with the results we have obtained for our implementations, their comparison, and the decisions we have reached from them.

• Discussions based on the construction of the Bangla WordNet will be found in Chapter 5.

• We conclude in Chapter 6.

Chapter 2
Background Study

Although the Bangla WordNet is a relatively new topic in the area of Bangla natural language processing, it has grabbed the attention of many researchers in recent times. Some researchers have worked with the goal of developing a complete Bangla WordNet and have proposed methods for constructing it. We will discuss some of these works in this chapter.

2.1 Literature Review

In this section we give a brief overview of some of the previous research works done on the Bangla WordNet, the current situation in this sector, and the scope for development. We also discuss current and previous word clustering techniques applied to various types of data and their performance.

The development of BWN[2] can be considered the first attempt at developing a Bangla WordNet. In 2008, Faruqe and Khan proposed this software framework to build and maintain a Bengali WordNet, presenting its design and implementation. Their approach can be seen in the figure below.

Figure 2.1: Block diagram of WordNet system[2]

With the help of a Grinder, they converted lexical source files in order to inject them into the WordNet database. They also developed an interface for the WordNet with key features like querying and editing the data. They further discussed how this framework could help future development of WordNets in other languages.

Another approach to constructing a WordNet was shown by Rahit, Al-Amin, Hasan, and Ahmed[3] in the BanglaNet project, where they constructed a baseline for a Bangla WordNet and connected it with the Princeton WordNet. They chose a semi-automatic cross-lingual sense mapping approach. The Princeton WordNet synsets were aligned to a bilingual dictionary through the English equivalent and its part-of-speech (POS). Their proposed method is shown in the figure below.

Figure 2.2: Proposed method for BanglaNet[3]

Not only the construction of a Bangla WordNet but also its practical applications have been explored by researchers. In 2017, Pal, Saha, and Naskar[7] tried a knowledge-based approach to determine the exact sense of an ambiguous Bengali word with the help of a Bengali WordNet. Their method was to check the dictionary definition of an ambiguous word for overlap with its surrounding words in a sentence, their synonyms, and the synonyms of the surrounding words, and thereby determine the exact meaning of that word. They reached an accuracy of 75%.

To construct the WordNet, we need to focus on establishing the semantic relationships of the words. Word clustering techniques will also be necessary to group related words. Much work has been done in this sector, both in Bangla natural language processing and in various other languages. We discuss some of these works below. Previous word clustering techniques mostly involved using an N-gram model to construct the clusters. This can be observed in the work of Ismail and Rahman [8], who proposed a Bangla word clustering method based on an N-gram language model. In this paper they tried to