Proceedings of the 14th International Conference on Natural Language Processing, pages 427–434, Kolkata, India, December 2017. © NLP Association of India (NLPAI)

Study on Visual Word Recognition in Bangla across Different Reader Groups

Manjira Sinha, Conduent Labs India, Bangalore, India
Tirthankar Dasgupta, TCS Innovation Labs, Kolkata, India
Anupam Basu, IIT Kharagpur, Kharagpur, India
{manjira87, iamtirthankar, anupambas}@gmail.com

Abstract

This paper presents a psycholinguistic study of visual word recognition in Bangla. The study examines the relationship between different word attributes and the word reading behavior of two target user groups whose native language is Bangla. The different target user groups also offer insight into the subjectivity of written word comprehension based on the reader's background. For the purpose of the study, reading in terms of visual stimuli for word comprehension has been considered. To the best of the authors' knowledge, this study is the first of its kind for a language like Bangla.

1 Introduction

Recognition and understanding of words are basic building blocks and the first step in language comprehension. At this stage, the form (visual representation) joins the meaning (conceptual representation). Therefore, the cognitive load associated with word reading is a significant contributor to overall text readability. The present study aims to capture the salience effects of different word attributes on word reading performance in Bangla, the second most spoken (after Hindi) and one of the official languages of India, with about 85 million native users in India.[1] The features studied in this work encompass orthographic properties of a word, such as its length in terms of the number of visual units or akshars, the number of unique orthographic shapes (i.e., the characteristic strokes), and complexity measures based on the familiarity of the akshars and strokes in a word. Phonological properties of a word, such as the number of syllables and spelling-to-sound consistency, have also been taken into account, along with semantic attributes such as the number of synonyms and the number of senses. Moreover, the feature list also includes word collocation attributes such as orthographic neighborhood size and phonological neighborhood size, which situate the given word with respect to other members of the vocabulary. The effects of the word attributes have been measured in terms of the reaction time and performance accuracy data obtained from empirical user experiments.

[1] http://www.ethnologue.com/statistics/size

The paper is organized as follows: section 2 presents the relevant literature; section 3 describes the participant details; sections 4 and 5 state the data preparation and the psycholinguistic experiment respectively; section 6 presents the feature descriptions and the experimental observations for words and non-words; finally, section 7 concludes the paper.

2 Related Works

Research in word recognition has been central to many areas of cognitive neuroscience (Frost et al., 2005), educational processes (Seidenberg, 2013), attention (Zevin and Balota, 2000), serial versus parallel processing (Coltheart et al., 1993), connectionism (Plaut et al., 1996) and much more. Typically, two different techniques are used to study visual word recognition: the lexical decision task and the naming task (Balota et al., 2004).
In the lexical decision task, a letter string is presented and participants are asked to decide whether the given string is a valid word in their language. In the naming task, on the other hand, participants are asked to read aloud a letter string as quickly as possible. The time taken by a
subject to complete each task after the visual presentation of the target is defined as the response time (RT). An analysis of the reaction times of the subjects reveals the actual processing of words in the brain. The early work in word recognition involves two distinct models: the activation or logogen model (Morton, 1969) and the search model (Forster and Bednall, 1976); both are based on the fundamental premise of frequency effects in word recognition. The frequency effect claims that high-frequency words are recognized more accurately and quickly than low-frequency words (Murray and Forster, 2004). The logogen model assumes recognition of words in terms of the activation of the constituent linguistic features (called logogens). Each logogen has a base activation value (also called the resting activation) that facilitates the recognition process. The resting activation of a given logogen is determined by its frequency of occurrence; that is, high-frequency words have a higher base activation value than low-frequency words. The search model, on the other hand, assumes that words are organized according to their frequencies and are searched serially. Taft and Hambly (1986) have proposed a hybrid model that includes features of both the activation and the serial search process. The interactive activation (IA) model (Diependaele et al., 2010) follows the connectionist approach and also incorporates the logogen model. In this framework, a word is initially perceived via its basic orthographic features, which in turn activate the higher-level syntactic and semantic features. The IA model also accounts for the word superiority effect, which holds that letters are recognized more accurately and quickly when they occur in a word than in a non-word (Grainger and Jacobs, 1996). An important extension of the IA model is the dual-route cascaded (DRC) model (Coltheart et al., 2001), which assumes two parallel processes of word recognition: the lexical route and the sub-lexical route. The lexical route accounts for the recognition process through the parallel activation of the orthographic and phonological features of a word, while the sub-lexical route possesses a serial processor that converts graphemic representations into phonemic forms. As an alternative to the two processing paths of the DRC model, the parallel distributed processing (PDP) model (Seidenberg and McClelland, 1989) proposes a single architecture to explain the different processing outputs. The model captures the distributed nature of recognition by assuming that each word is associated with a distinct activation pattern across a common set of features used to recognize the word; the features may include orthography, phonology, morphology or semantics. Generalizations of the PDP model to non-words and irregular words have been proposed by Plaut et al. (1996).

3 Participants

In order to understand how the different cognitive processes vary across user groups, two categories of users have been considered for each user study. Group 1 consists of 25 native users of Bangla in the age range 21-25 years who are pursuing college-level education, and group 2 consists of 25 native users in the age range 13-17 years (refer to figure 1). In this paper, the variations in age and years of education have been taken into account. Moreover, we have considered a distribution over medium to low socio-economic sections, with monthly household incomes ranging from INR 4500 to INR 15000.
The Socio-Economic Classification (SEC) has been performed according to the guidelines of the Market Research Society of India (MRSI).[2] MRSI has defined 12 socio-economic strata, A1 to E3, in decreasing order. The containment of the socio-economic range was necessary as it directly affects education, literacy, and thus the comprehension skills of a reader. In addition, to capture first-language skill, each native speaker was asked to rate his/her proficiency in Bangla on a 1-5 scale (1: very poor, 5: very strong); see figure 2.

Figure 1: Participants' details

[2] http://imrbint.com/research/The-New-SEC-system-3rdMay2011.pdf

Figure 2: Proficiency in the mother tongue

4 Data preparation

From a Bangla corpus[3] of about 400,000 unique words, we have sampled 3500 words for the study. The words were selected in such a way that they represent 'average' words over the corpus: the median values of the word frequency and length distributions lie at 368 and 5 respectively (refer to figure 3 for some sample words used in the experiment). In psycholinguistics, to preserve the experimental standard, it is essential to restrict the participants from making any strategic guess about the input stimuli. This has been achieved by randomly introducing non-words among the valid words during the experiment. However, designing non-words is a non-trivial process, and the readers' responses to different types of non-words often open up new insights into the process of word comprehension. Some examples of non-words are provided in figure 4.

[3] The Unicode corpus of Bangla was developed by the authors as part of a broader study; the details are beyond the scope of this paper.

5 Experimental Procedure

We have conducted a lexical decision task (LDT) experiment (Meyer and Schvaneveldt, 1971) to study the visual recognition of Bangla words by native speakers. In this experiment, a participant is presented with a visual input, generally a string of letters that can be a word, a non-word or a pseudo-word. The task is to indicate whether the presented stimulus is a valid Bangla word or not. The reaction time of each participant and the accuracy for each experimental stimulus across all participants are recorded for further analysis. The time window for a user to submit a response has been set at 4 seconds, failing which a No Response is recorded; a No Response against a stimulus is automatically counted as a wrong response. In either case, the stimulus is followed by hash signs (####) and then the next letter string after a 2.5 second delay.

Fifty users from the two target user groups participated in the LDT experiment. The 5000 experimental stimuli (2500 words and 2500 non-words) were distributed randomly among 67 equal-sized 75-word sets. Each user was presented a maximum of three sets a day, with at least a one-hour gap between two sets. Before the experimental data were recorded, a sample set of 20 words was presented to the users to familiarize them with the experiment.

6 Observations

All incorrect responses and extreme reaction times (RT: the time taken to respond to a stimulus) have been discarded. Participants and experimental words with less than 70% accuracy have also been discarded. Finally, 440 words, with the RTs of 42 participants (22 from group 1 and 20 from group 2), have been used for further study. The RTs of each user have been normalized by z-transformation (Balota et al., 2007), and the mean z-score over all users for a word has been computed; negative z-scores indicate shorter response latencies. A paired t-test between the results of the two user groups gives p < 0.05, signifying a difference between the reading characteristics of the two groups.
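To make the normalization step concrete, the sketch below z-transforms one participant's raw reaction times. It is a minimal illustration, assuming RTs are available as a per-participant array; the class and data are placeholders, not the authors' analysis pipeline.

    using System;
    using System.Linq;

    // Minimal sketch: per-participant z-transformation of reaction times.
    static class RtNormalization
    {
        // z = (rt - mean) / sd, using the sample standard deviation.
        public static double[] ZScores(double[] rts)
        {
            double mean = rts.Average();
            double sd = Math.Sqrt(rts.Sum(rt => (rt - mean) * (rt - mean)) / (rts.Length - 1));
            return rts.Select(rt => (rt - mean) / sd).ToArray();
        }

        public static void Main()
        {
            double[] rts = { 640, 712, 583, 901, 655 };          // hypothetical RTs in ms
            Console.WriteLine(string.Join(", ", ZScores(rts)));  // negative z = faster than average
        }
    }

Averaging such per-word z-scores across users then gives the word-level latency measure analyzed below.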
Next, we have studied the influence of different word features on the outcome of the lexical decision task. The word features studied in this paper have been selected based on their prominence in the literature (Yarkoni et al., 2008) and their relevance to Bangla. The features are:

• Morphological family size: the morphological family size of a word w comprises all the inflected, derived and compound paradigms that contain w (De Jong et al., 2000).

• Word length (linear): the length is measured in terms of the number of visual units or akshars; as Bangla belongs to the abugida group, mere alphabetic word length does not reflect the difficulty encountered in reading (Sinha et al., 2012b).

• Number of complex characters in a word: complex characters are the consonant conjuncts or jukta-akshars present in a word.

• Number of unique shapes in a word: the Bangla script uses space in a non-linear way, and the akshars hang from a distinct horizontal head-stroke called the mAtrA. The letters are made up of combinations of different shapes or strokes; altogether, 57 unique strokes have been identified and indexed. The initial hypothesis is that the more distinct shapes a word contains, the more difficult it is to comprehend.

Figure 3: Examples of valid words for the experiment
Figure 4: Construction of non-words for the experiment

• Orthographic word complexity: during visual word recognition, the reader has to recognize orthographic patterns (Selfridge, 1958). Word-level representations interact with letter-level representations, i.e., the characteristic shapes or strokes (refer to figure 5). As no standard dataset on shape combinations in Bangla letters is available, the unique shapes or strokes have been identified intuitively across all the Bangla letters, including the consonant conjuncts, with the Bangla Akademi font taken as the standard Bangla orthography. Altogether, 57 unique strokes have been identified and numbered, and every Bangla letter has been represented as a combination of its constituent shapes.

Figure 5: Characteristic strokes of Bangla akshars
Figure 6: Mapping of Bangla akshars to characteristic shapes

To capture the interactive nature of visual complexity, an orthographic complexity model has been derived in the following way:

(a) The difficulty d(s) of a characteristic shape or stroke s is inversely proportional to its familiarity or frequency f(s), where the frequency of the shapes has been calculated from the unique word list of the Bangla corpus without considering the frequency of each word:

    d(s) = 1 / f(s)    (1)

(b) The difficulty d(a) of an akshar a is the sum of the difficulties of its constituent shapes, normalized by the number of shapes n:

    d(a) = (1/n) Σᵢ d(sᵢ)    (2)

(c) Finally, the difficulty d(w) of a word w is the sum of the difficulties of its constituent akshars, normalized by the word length l and multiplied by the inverse of the word frequency f(w):

    d(w) = (1/f(w)) · (1/l) Σⱼ d(aⱼ)    (3)
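Equations (1)-(3) translate directly into a small program. The sketch below assumes a stroke inventory with corpus frequencies and an akshar-to-stroke segmentation as inputs; the names and data structures are illustrative, not the authors' implementation of the 57-stroke index.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Sketch of the orthographic complexity model of equations (1)-(3).
    class OrthographicComplexity
    {
        readonly Dictionary<int, double> strokeFreq;  // f(s): stroke id -> corpus frequency

        public OrthographicComplexity(Dictionary<int, double> strokeFreq)
        {
            this.strokeFreq = strokeFreq;
        }

        // (1) d(s) = 1 / f(s)
        double StrokeDifficulty(int s) => 1.0 / strokeFreq[s];

        // (2) d(a) = (1/n) * sum_i d(s_i); an akshar is modelled as its stroke ids
        double AksharDifficulty(int[] akshar) => akshar.Average(StrokeDifficulty);

        // (3) d(w) = (1/f(w)) * (1/l) * sum_j d(a_j)
        public double WordDifficulty(int[][] akshars, double wordFreq)
            => (1.0 / wordFreq) * akshars.Average(AksharDifficulty);
    }

In other words, equation (2) is the mean stroke difficulty of an akshar and equation (3) the mean akshar difficulty scaled by the inverse word frequency, so rare words built from unfamiliar strokes score as the most complex.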
• Orthographic and phonological neighborhood: we have constructed akshar-based, orthographic-shape-based and phonological-pattern-based neighborhood structures. The akshar-based distance measure treats all akshars as having the same visual complexity regardless of their orthographic properties; this is why the distance among words based on orthographic strokes has been treated separately. At each level of orthographic information, the neighbors have been categorized into three groups based on their distance from the given word.
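A minimal sketch of this distance-band grouping follows. It assumes each word is already segmented into a sequence of units (akshars, strokes or phonemes, depending on the level) and uses a plain Levenshtein distance over those units; the vocabulary and the segmentation are external inputs, and none of this is the authors' released code.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class Neighborhood
    {
        // Levenshtein distance over unit sequences (e.g., akshars) rather than characters.
        static int Distance(string[] a, string[] b)
        {
            var d = new int[a.Length + 1, b.Length + 1];
            for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
            for (int j = 0; j <= b.Length; j++) d[0, j] = j;
            for (int i = 1; i <= a.Length; i++)
                for (int j = 1; j <= b.Length; j++)
                    d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                                       d[i - 1, j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1));
            return d[a.Length, b.Length];
        }

        // Bucket the vocabulary into the three distance bands (1, 2 and 3) described above.
        public static Dictionary<int, List<string[]>> Buckets(
            string[] word, IEnumerable<string[]> vocabulary)
            => vocabulary.Select(v => (v, dist: Distance(word, v)))
                         .Where(p => p.dist >= 1 && p.dist <= 3)
                         .GroupBy(p => p.dist)
                         .ToDictionary(g => g.Key, g => g.Select(p => p.v).ToList());
    }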
• Number of syllables: the syllabification of the Bangla words has been performed using a Bangla grapheme-to-phoneme conversion tool developed in-house.

• Semantic neighborhood: this measure represents the number of semantic neighbors of a word within the lexical organization of the language. It is computed from the semantic lexicon described in (Sinha et al., 2012a).

The mean and standard deviation values of the word features described above are presented in figure 7. We have analyzed the RTs corresponding to the above features using Spearman's correlation coefficient. The coefficient values between each word attribute and word recognition performance for the two user groups are presented in figure 8.

Figure 7: Properties of valid words for the experiment
Figure 8: Correlation analysis between word attributes and data from the LDT (correlation coefficients marked with # are not significant, p-value > 0.05)

From figure 8 we can observe that the correlation coefficients for lexical decision latencies and decision accuracies are always less than 0.5, though they differ between the groups. The difference in the coefficient values may be attributed to the different reading patterns of the two groups. The number of syllables has correlation coefficients similar to word length, because the akshar boundaries most often match the phonological syllable boundaries. The measure of orthographic word complexity has low correlation coefficients with reaction times and accuracies; this can be an outcome of considering only the orthographic attributes of a word, isolating it from the phonological and semantic dimensions. In future, the measure needs to be augmented with those word features.

The number of unique shapes and the number of complex characters also do not show significant correlations. Spelling-to-sound consistency has only a moderate correlation for both groups, which suggests that speakers are not very sensitive to minor inconsistencies in the spelling-to-sound mapping. The correlation coefficients of distant orthographic and phonological neighbors, of immediate orthographic neighbors at the shape level, and of the semantic neighborhood are not significant for either group. This indicates that beyond a threshold distance, the similarity or dissimilarity of the given word to other words in the vocabulary does not affect the reader's decisions. In addition, at the shape level, the number of immediate orthographic neighbors may be unimportant because an akshar is often constituted of more than two characteristic orthographic shapes, and such minor changes in orthographic properties may therefore go unnoticed while reading.

Finally, the present calculation of semantic neighborhoods has been based on exhaustive language information (Sinha et al., 2012c), but actual users may not possess such deep language knowledge and are therefore less affected by the semantic neighborhood structure. On the other hand, the number of senses or meanings of a word does not have an inhibitory effect on the decision-making process, as no ambiguity had to be resolved here; instead, the use of a word in different contexts increases its chance of being encountered more often by native readers of Bangla.

Moreover, the decisions against non-words are just as interesting as those against the valid words.
Non-words such as kakShataNa [correct: katakShaNa, 'time duration'], AkampIta [correct: akampita, 'steady'] and TAlAN [correct: cAlAna, 'transaction'] have almost always been perceived as correct words by the readers, owing to their orthographic and phonological proximity to the corresponding valid words. On the other hand, a proper non-word, i.e., an arbitrary letter string
such as NajatathI, has been accurately classified as invalid. This indicates that the cognitive processes of reading are sensitive to the probability of which akshar patterns can occur in a valid Bangla word.

7 Conclusion

In this paper, we have presented a study of the comprehension difficulty of visual word recognition in Bangla, the results of which are stored as a lexical decision database. A number of interesting observations have been made from the experimental data, and the observations have been complemented with rational inferences based on them. The correlation coefficients between word attributes and reaction time data reveal that no individual feature has a large covariance factor; rather, the collective effect of all of them determines the cognitive load of comprehension. Moreover, using a reference language corpus based only on text from printed sources has proven to be a shortcoming for drawing meaningful inferences. Some initial insights into the decisions corresponding to the non-words have also been presented.

References

David A. Balota, Michael J. Cortese, Susan D. Sergent-Marshall, Daniel H. Spieler, and Melvin J. Yap. 2004. Visual word recognition of single-syllable words. Journal of Experimental Psychology: General, 133(2):283.

D. A. Balota, M. J. Yap, K. A. Hutchison, M. J. Cortese, B. Kessler, B. Loftis, J. H. Neely, D. L. Nelson, G. B. Simpson, and R. Treiman. 2007. The English Lexicon Project. Behavior Research Methods, 39(3):445–459.

M. Coltheart, B. Curtis, P. Atkins, and M. Haller. 1993. Models of reading aloud: Dual-route and parallel-distributed-processing approaches. Psychological Review, 100(4):589.

Max Coltheart, Kathleen Rastle, Conrad Perry, Robyn Langdon, and Johannes Ziegler. 2001. DRC: a dual-route cascaded model of visual word recognition and reading aloud. Psychological Review, 108(1):204.

Nivja H. de Jong, Robert Schreuder, and R. Harald Baayen. 2000. The morphological family size effect and morphology. Language and Cognitive Processes, 15(4-5):329–365.

K. Diependaele, J. C. Ziegler, and J. Grainger. 2010. Fast phonology and the bimodal interactive activation model. European Journal of Cognitive Psychology, 22(5):764–778.

Kenneth I. Forster and Elizabeth S. Bednall. 1976. Terminating and exhaustive search in lexical access. Memory & Cognition, 4(1):53–61.

Stephen J. Frost, W. Einar Mencl, Rebecca Sandak, Dina L. Moore, Jay G. Rueckl, Leonard Katz, Robert K. Fulbright, and Kenneth R. Pugh. 2005. A functional magnetic resonance imaging study of the tradeoff between semantics and phonology in reading aloud. NeuroReport, 16(6):621–624.

Jonathan Grainger and Arthur M. Jacobs. 1996. Orthographic processing in visual word recognition: a multiple read-out model. Psychological Review, 103(3):518.

David E. Meyer and Roger W. Schvaneveldt. 1971. Facilitation in recognizing pairs of words: evidence of a dependence between retrieval operations. Journal of Experimental Psychology, 90(2):227.

John Morton. 1969. Interaction of information in word recognition. Psychological Review, 76(2):165.

Wayne S. Murray and Kenneth I. Forster. 2004. Serial mechanisms in lexical access: the rank hypothesis. Psychological Review, 111(3):721.

David C. Plaut, James L. McClelland, Mark S. Seidenberg, and Karalyn Patterson. 1996. Understanding normal and impaired word reading: computational principles in quasi-regular domains. Psychological Review, 103(1):56.

Mark S. Seidenberg and James L. McClelland.
1989. A distributed, developmental model of word recognition and naming. Psychological Review, 96(4):523.

Mark S. Seidenberg. 2013. The science of reading and its educational implications. Language Learning and Development, 9(4):331–360.

Oliver G. Selfridge. 1958. Pandemonium: a paradigm for learning. In Mechanisation of Thought Processes.

M. Sinha, T. Dasgupta, and A. Basu. 2012a. A complex network analysis of syllables in Bangla through SyllableNet. In
Girish Nath Jha, Kalika Bali, and Sobha L., editors, Workshop on Indian Language Data: Resources and Evaluation, LREC, pages 131–138, May.

M. Sinha, S. Sharma, T. Dasgupta, and A. Basu. 2012b. New readability measures for Bangla and Hindi texts. Communicated to the 24th International Conference on Computational Linguistics (2012), IIT Bombay, August.

Manjira Sinha, Abhik Jana, Tirthankar Dasgupta, and Anupam Basu. 2012c. A new semantic lexicon and similarity measure in Bangla. In Proceedings of the 3rd Workshop on Cognitive Aspects of the Lexicon, pages 171–182, Mumbai, India, December. The COLING 2012 Organizing Committee.

Marcus Taft and Gail Hambly. 1986. Exploring the cohort model of spoken word recognition. Cognition, 22(3):259–282.

T. Yarkoni, D. Balota, and M. Yap. 2008. Moving beyond Coltheart's N: A new measure of orthographic similarity. Psychonomic Bulletin & Review, 15(5):971–979.

Jason D. Zevin and David A. Balota. 2000. Priming and attentional control of lexical and sublexical pathways during naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(1):121.
A SYSTEM FOR CHECKING SPELLING, SEARCHING NAME & PROVIDING SUGGESTIONS IN BANGLA WORD

MD HABIBUR RAHMAN
Student ID: 012102013

A Thesis in the Department of Computer Science and Engineering

Presented in Partial Fulfillment of the Requirements for the Degree of Master of Science in Computer Science and Engineering

United International University
Dhaka, Bangladesh
February, 2018
© MD HABIBUR RAHMAN, 2018

Approval Certificate

This thesis titled "Avro for Bangla and its Application to Spelling Checker, Transliteration and Name Searching", submitted by Md Habibur Rahman, Student ID: 012102013, has been accepted as satisfactory in fulfillment of the requirement for the degree of Master of Science in Computer Science and Engineering on 27.02.2018.

Board of Examiners

Supervisor: Prof. Dr. Mohammad Nurul Huda, Professor & Coordinator - MSCSE, United International University
Head Examiner: Novia Nurain, Assistant Professor, CSE, United International University
Examiner-I: Mohammad Moniruzzaman, Assistant Professor, CSE, United International University
Examiner-II: Suman Ahmmed, Assistant Professor, CSE, United International University
Ex-Officio: Swakkhar Shatabda, Associate Professor, United International University

Declaration

This is to certify that the work entitled "Avro for Bangla and its Application to Spelling Checker, Transliteration and Name Searching" is the outcome of the research carried out by me under the supervision of Prof. Dr. Mohammad Nurul Huda.

Md Habibur Rahman
Student ID: 012102013
Department: Computer Science and Engineering

In my capacity as supervisor of the candidate's project, I certify that the above statements are true to the best of my knowledge.

Dr. Mohammad Nurul Huda
Professor & Coordinator - MSCSE

Abstract

This thesis presents an improved phonetic encoding for Bangla which can be used for spelling checking, transliteration and name-searching applications, as well as for cross-lingual information retrieval. Producing an appropriate phonetic code for Bangla is always a significant challenge because of the complex and often inconsistent spelling rules of Bangla words. We propose a phonetic encoding technique for Bangla that considers various context-sensitive rules, including the large repertoire of conjuncts in Bangla. The proposed system combines the Edit Distance, Soundex and Metaphone algorithms; after applying all of these algorithms, the targeted word is retrieved within the shortest possible time.

Acknowledgments

At the very outset, I would like to express my deep gratitude to Almighty Allah, who gave me enough knowledge and patience to complete the thesis within the stipulated time. I would like to give special thanks to my supervisor, Prof. Dr. Mohammad Nurul Huda, for his precious and constructive suggestions during the planning and development of this research work. I would also like to extend my thanks to the concerned faculty members of United International University for their kind support throughout the research work. Finally, I wish to thank my parents for their encouragement throughout my study.

Table of Contents
LIST OF TABLES
LIST OF FIGURES
INTRODUCTION
PHONETIC ENCODING
PROPOSED ENCODING
APPLICATIONS OF PHONETIC ENCODING
CONCLUSION
References

List of Tables

Table 1: Soundex encoding table
Table 2: Phonetic encoding table
Table 3: Table for direct mapping
Table 4: Example of edit distance
Table 5: Performance of encoding
Table 6: Distribution of error
Table 7: Proposed name searching for Bangla using direct mapping

List of Figures

Figure 1: The Soundex algorithm
Figure 2: Proposed technique
Figure 3: Sample output for বর্ণ
Figure 4: Sample output for ড ইলরক্ট
Figure 5: Sample output for েত ম রর

Chapter I: INTRODUCTION

Bangla is one of the most widely spoken languages, especially in the Indian subcontinent. Bengali spelling rules are very complex in nature. One of the basic reasons for this is its consonant clusters or juktakkhors. Other notable reasons for its complexity are the phonetic similarity of the characters and the difference between the grapheme representation and the phonetic utterance. Phonetic encoding for Bangla is always a great challenge because of the complex nature of its letters and words. The first encoding for Bangla was based on the Soundex method, which was not able to handle the complexity of Bangla spelling rules. In this thesis, Chapter II describes phonetic encoding elaborately, together with the scope and importance of our encoding and the limitations of other encodings; Chapter III proposes our encoding with reasoning; Chapter IV explains the methodology of our new applications for Bangla in detail; and finally we summarize how our new system performs better than the existing systems.

Chapter II: PHONETIC ENCODING

2.1 Definition

A phonetic encoding codes a string based on its pronunciation. The input of a phonetic encoding algorithm is a word, and the result is an encoded key that should be the same for all words that are pronounced similarly, which allows for a reasonable amount of fuzziness. For instance, the Metaphone encoding gives the code RLS for the word analyze in English. It is known that analyze and analise have the same pronunciation; hence, a good encoding for English should be able to give the same code RLS to analise as well.
2.2 Phonetic Encoding for English

Various approximate string matching algorithms exist for English, such as Soundex, Metaphone, Double Metaphone and PHONIX. These phonetic matching algorithms partition the consonants by phonetic similarity and then use a single key to encode each set. For these algorithms, only the first few consonant sounds are encoded, unless the first letter is a vowel.

2.3 Soundex

Soundex partitions the set of letters into seven disjoint sets, assuming that the letters in the same set have a similar sound. Each of these sets is given a unique key, except for the set containing the vowels and the letters h, w and y, which are considered to be silent and are not
considered during encoding. The Soundex codes are shown in Table 1. The Soundex algorithm transforms all but the first letter of each string into the code, then truncates the result to be at most four characters long. Zeros are added at the end if necessary to produce a four-character code. For example, Washington is coded W-252 (W, 2 for the S, 5 for the N, 2 for the G, remaining letters disregarded), and Lee is coded L-000 (L, 000 added). Soundex deals with a small table size and works letter by letter; as a result, it is faster than other phonetic methods.

Table 1: Soundex encoding table

Code           Letters
0 (not coded)  A, E, I, O, U, H, W, Y
1              B, F, P, V
2              C, G, J, K, Q, S, X, Z
3              D, T
4              L
5              M, N
6              R

1. Capitalize all letters in the word and drop all punctuation marks. Pad the word with rightmost blanks as needed during each procedure step.
2. Retain the first letter of the word.
3. Change all occurrences of the following letters to '0' (zero): 'A', 'E', 'I', 'O', 'U', 'H', 'W', 'Y'.
4. Change letters from the following sets into the digit given:
   • 1 = 'B', 'F', 'P', 'V'
   • 2 = 'C', 'G', 'J', 'K', 'Q', 'S', 'X', 'Z'
   • 3 = 'D', 'T'
   • 4 = 'L'
   • 5 = 'M', 'N'
   • 6 = 'R'
5. Remove all pairs of equal digits which occur beside each other from the string that resulted after step 4.
6. Remove all zeros from the string that results from step 5 (placed there in step 3).
7. Pad the string that resulted from step 6 with trailing zeros and return only the first four positions, which will be of the form <uppercase letter> <digit> <digit> <digit>.

Figure 1: The Soundex algorithm
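Read literally, steps 1-7 yield the classic four-character Soundex code. The sketch below is a minimal C# rendering of those steps; it is an illustration only, not the Soundex.Encode class used later in this thesis.

    using System;
    using System.Linq;
    using System.Text;

    // Minimal four-character Soundex following steps 1-7 above (illustration only).
    static class SoundexSketch
    {
        static char Code(char c) =>
            "BFPV".IndexOf(c) >= 0     ? '1' :
            "CGJKQSXZ".IndexOf(c) >= 0 ? '2' :
            "DT".IndexOf(c) >= 0       ? '3' :
            c == 'L'                   ? '4' :
            "MN".IndexOf(c) >= 0       ? '5' :
            c == 'R'                   ? '6' : '0';   // vowels and H, W, Y are coded '0'

        public static string Encode(string word)
        {
            string w = new string(word.ToUpperInvariant().Where(char.IsLetter).ToArray());
            if (w.Length == 0) return string.Empty;
            var sb = new StringBuilder().Append(w[0]);            // step 2: retain first letter
            char prev = Code(w[0]);
            foreach (char c in w.Skip(1))
            {
                char code = Code(c);                              // steps 3-4: map to digits
                if (code != '0' && code != prev) sb.Append(code); // steps 5-6: drop repeats, zeros
                prev = code;
            }
            return sb.Append("000").ToString(0, 4);               // step 7: pad, truncate to four
        }

        public static void Main()
        {
            Console.WriteLine(Encode("Washington")); // W252
            Console.WriteLine(Encode("Lee"));        // L000
        }
    }

Encode("Washington") returns W252 and Encode("Lee") returns L000, matching the worked examples above.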
<s>vowel follows H otherwise J -> J K -> silent if after "c" K otherwise L -> L M -> M N -> N P -> F if before "h" P otherwise Q -> K R -> R S -> X (sh) if before "h" or in -sio- or -sia- S otherwise T -> X (sh) if -tia- or -tio- 0 (th) if before "h" silent if in -tch- T otherwise V -> F W -> silent if not followed by a vowel W if followed by a vowel X -> KS Y -> silent if not followed by a vowel Y if followed by a vowel Z -> S Initial Letter Exceptions Initial kn-, gn- pn, ac- or wr- -> drop first letter Initial x- -> change to "s" Initial wh- -> change to "w" CHAPTER III: PROPOSED ENCODING We needed to keep few things in our mind while proposing this encoding. We particularly considered the phonetic similarity of letters to give them the same code and also to keep in mind the orthographic or spelling rules as well as to know how letters spell in different context so that we can encode the letters with similar sounding letters considering the context.Using this encoding, anyone would be able to work as an intermediate code in multi-lingual applications. We will be encoding our Bangla letters to a set of Latin alphabets so that it can easily work as an intermediate language to work with English. We assume that the Bangla text is encoded using Unicode Normalization Form C (NFC). 3.1 Proposed phonetic encoding for words We will have two encoding- mainly one for words and a few variations from it for names as well. This section describes about the words encoding. Throughout the thesis paper, we termed our proposed phonetic encoding by Avro phonetic encoding or proposed phonetic encoding. In order to encode Bangla words, we need to consider context and also need to generate multiple codes for the same string. These constraints can be handled in Edit Distance, Soundex and metaphone algorithm, which we did for Bangla here. That’s why, we termed it as metaphone phonetic encoding. 3.2 Phonetic Encoding Following Table 2: Phonetic Encoding table for words is the table of proposed Avro phonetic encoding for words. Followed by the table, there will be reasoning of each of the encoding. Table 2: Phonetic Encoding Table Letter Name ASCII Code O অ 2437 A আ 2438 I ই 2439 I ঈ 2440 U উ 2441 U ঊ 2442 rri ঋ 2443 E এ 2447 OI ঐ 2448 O ঑ 2451 OU ঒ 2452 K ও 2453 kh ঔ 2454 G ক 2455 ghgh খ 2456 ng গ 2457 C ঘ 2458 ch ঙ 2459 J চ 2460 jh ছ 2461 NG ঞ 2462 T ট 2463 Th ঠ 2464 D ড 2465 Dh ঢ 2466 N ণ 2467 T ত 2468 th থ 2469 D দ 2470 dh ধ 2471 N ন 2472 P ঩ 2474 ph প 2475</s>
B     ব       2476
bh    ভ       2477
m     ম       2478
Z     য       2479
R     র       2480
L     ল       2482
sh    শ       2486
S     ষ       2487
s     স       2488
h     হ       2489
a     া       2494
i     ি       2495
I     ী       2496
u     ু       2497
U     ূ       2498
e     ে       2503
OI    ৈ       2504
O     ো       2507
OU    ৌ       2508
hs    ্       2509
TH    ৎ       2510
R     ড়       2524
Rh    ঢ়       2525
Y     য়       2527
o     ঁ       2433
ng    ং       2434

3.3 Existing Phonetic Encoding for Bangla

Although the phonetic encoding technique is some eighty years old, it is new for Bangla: the first Bangla phonetic encoding was proposed by Hoque and Kaykobad in 2002, and Zaman and Khan proposed their version of a Soundex-type Bangla phonetic encoding in 2004. Both encodings use "Soundex" in their names because they follow the general principle of Soundex encoding: partitioning the letters into disjoint sets.

CHAPTER IV: APPLICATIONS OF PHONETIC ENCODING

Without being properly used in applications, a phonetic encoding cannot play a significant role for a language. Name searching was the first such application of phonetic encoding; spelling checkers adopted the technique later. We have used our phonetic encoding in several applications for Bangla: spelling checking, transliteration, cross-lingual information retrieval and name searching. In each case, we first show how the application was developed earlier and how it performs, and then how phonetic encoding improves its performance.

4.1 Transliteration using Direct Mapping

Some software uses exactly this mapping. We give the mapping that we used for our direct-mapping transliteration. Since this direct mapping is still a phonetic mapping, the difference is that it does not look up the dictionary for a word with the same pronunciation. We have introduced an intermediate encoding which is applied before converting. We need it because in some cases the input should not be converted directly: for example, bool is pronounced bul, hence before mapping we convert "oo" to "u". Moreover, we do not only consider one letter for one-to-one mapping; we sometimes consider bigrams, because some Bangla letters are represented phonetically in English by bigrams, such as kh for the Bangla letter খ /kh/.
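The intermediate step can be sketched as a tiny pre-pass over the Latin input before the letter/bigram mapping is applied; the "oo" -> "u" pair is from the text above, and the helper name is an assumption.

    // Pre-pass: normalize Latin digraphs before direct mapping (e.g., "bool" -> "bul").
    static class IntermediateEncoding
    {
        static readonly (string from, string to)[] rewrites = { ("oo", "u") }; // from the text

        public static string Apply(string latin)
        {
            foreach (var (from, to) in rewrites)
                latin = latin.Replace(from, to);
            return latin;
        }
    }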
Table 3: Table for direct mapping (the relevant switch cases from the transliteration code; commented-out calls show alternative romanizations)

case (char)2433: engText.Append("o"); break;   // ঁ chandra-bindu
case (char)2434: engText.Append("ng"); break;  // ং anusvara
case (char)2435: /*engText.Append(":");*/ break; // ঃ visarga
case (char)2437: engText.Append("o"); break;   // অ
case (char)2438: engText.Append("a"); break;   // আ
case (char)2439: engText.Append("e"); break;   // ই
case (char)2440: engText.Append("E"); break;   // ঈ
case (char)2441: engText.Append("u"); break;   // উ
case (char)2442: engText.Append("U"); break;   // ঊ
case (char)2443: engText.Append("rri"); break; // ঋ
case (char)2447: engText.Append("e"); break;   // এ
case (char)2448: engText.Append("OI"); break;  // ঐ
case (char)2451: engText.Append("O"); break;   // ও
case (char)2452: engText.Append("OU"); break;  // ঔ
case (char)2453: engText.Append("k"); /*engText.Append("ko");*/ break;  // ক
case (char)2454: engText.Append("kh"); /*engText.Append("kha");*/ break; // খ
case (char)2455: engText.Append("g"); /*engText.Append("go"); engText.Append("G");*/ break; // গ
case (char)2456: engText.Append("GH"); break;  // ঘ
case (char)2457: engText.Append("N"); /*engText.Append("g");*/ break;   // ঙ
case (char)2458: engText.Append("c"); /*engText.Append("co");*/ break;  // চ
case (char)2459: engText.Append("ch"); /*engText.Append("CH");*/ break; // ছ
case (char)2460: engText.Append("j"); /*engText.Append("jo");*/ break;  // জ
case (char)2461: engText.Append("jh"); break;  // ঝ
case (char)2462: /*engText.Append("n");*/ break; // ঞ
case (char)2463: engText.Append("T"); /*engText.Append("To");*/ break;  // ট
case (char)2464: engText.Append("Th"); /*engText.Append("TH");*/ break; // ঠ
case (char)2465: engText.Append("D"); /*engText.Append("Do");*/ break;  // ড
case (char)2466: engText.Append("Dh"); /*engText.Append("DH");*/ break; // ঢ
case (char)2467: engText.Append("N"); /*engText.Append("No");*/ break;  // ণ
case (char)2468: engText.Append("t"); /*engText.Append("to");*/ break;  // ত
case (char)2469: engText.Append("th"); /*engText.Append("tho");*/ break; // থ
case (char)2470: engText.Append("d"); /*engText.Append("do");*/ break;  // দ
case (char)2471: engText.Append("dh"); break;  // ধ
case (char)2472: engText.Append("n"); /*engText.Append("no");*/ break;  // ন
case (char)2474: engText.Append("p"); /*engText.Append("po");*/ break;  // প
case (char)2475: engText.Append("ph"); /*engText.Append("f");*/ break;  // ফ
case (char)2476: engText.Append("b"); /*engText.Append("bo");*/ break;  // ব
case (char)2477: engText.Append("bh"); /*engText.Append("BH"); engText.Append("v");*/ break; // ভ
case (char)2478: engText.Append("m"); /*engText.Append("mo");*/ break;  // ম
case (char)2479: engText.Append("z"); break;   // য
case (char)2480: engText.Append("r"); /*engText.Append("ro");*/ break;  // র
case (char)2482: engText.Append("L"); /*engText.Append("Lo");*/ break;  // ল
case (char)2486: engText.Append("sh"); /*engText.Append("S");*/ break;  // শ
case (char)2487: engText.Append("S"); /*engText.Append("h");*/ break;   // ষ
case (char)2488: engText.Append("s"); /*engText.Append("so");*/ break;  // স
case (char)2489: engText.Append("h"); /*engText.Append("ho");*/ break;  // হ
case (char)2494: engText.Append("a"); break;   // া a-kar
case (char)2495: engText.Append("i"); break;   // ি rossi-kar
case (char)2496: engText.Append("I"); break;   // ী dirghi-kar
case (char)2497: engText.Append("u"); break;   // ু rossu-kar
case (char)2498: engText.Append("U"); break;   // ূ dirghu-kar
case (char)2503: engText.Append("e"); break;   // ে e-kar
case (char)2504: engText.Append("OI"); break;  // ৈ oi-kar
case (char)2507: engText.Append("O"); break;   // ো o-kar
case (char)2508: engText.Append(","); break;   // ৌ ou-kar
case (char)2509: /*engText.Append("OU");*/ break; // ্ hasanta
case (char)2510: engText.Append("t"); break;   // ৎ khanda-ta
case (char)2524: engText.Append("R"); /*engText.Append("Ro");*/ break;  // ড়
case (char)2525: engText.Append("Rh"); break;  // ঢ়
case (char)2527: engText.Append("Y"); /*engText.Append("Yo");*/ break;  // য়
case ' ': engText.Append("kkh"); break;        // ক্ষ kkh conjunct

4.2 Phonetic mapping

In phonetic mapping, the basic idea is to check whether the dictionary has a word with the same pronunciation. The algorithm of phonetic mapping is:

- if there is a word with the same pronunciation in the dictionary, convert the input to that word;
- else, if there are multiple words with the same pronunciation in the dictionary, give them as suggestions and let the user select which one to use;
- else, if there is no word with the same pronunciation in the dictionary, convert the input using direct mapping.

Now our main challenge is how to get the pronunciation of a Bangla word so that we can check it against an English word and recognize that the two have the same pronunciation. We have used the phonetic encoding for Bangla proposed in section 3.2; that encoding encodes a Bangla word into an English word that represents its pronunciation. So our only remaining challenge is to convert the English words in the same manner, so that both encodings are consistent. For example, one such word is encoded into klm.
4.3 Spelling Checker

A spelling checker may be used in various applications such as Optical Character Recognition (OCR), Machine Translation (MT), Natural Language Processing (NLP) and so on.

4.4 Spelling error patterns

There are two types of word error: non-word errors and real-word errors. Within non-word errors there are again two types: typographical errors and phonetic errors. Typographical errors may occur because of typing mistakes, negligence, lack of concentration or other reasons. Phonetic errors happen when the user does not know the spelling of a desired word, although he or she knows its pronunciation. An early study found that 80% of all misspelled words (non-word errors) in a sample of human-keypunched text were caused by single-error misspellings, i.e., a single one of the following errors:
- Substitution error: mistyping the as ther.
- Deletion error: mistyping the as th.
- Insertion error: mistyping the as thw.
- Transposition error: mistyping the as hte.

These typographical errors occur due to typing mistakes, negligence, lack of concentration or other reasons. When the computer underlines such a word in red, we can often correct it easily without even looking at the spelling suggestions. The scenario for phonetic errors is different: they occur when the user does not know the spelling of a desired word but knows its pronunciation. The user may then write the word from its pronunciation, but in the case of Bangla, because of its complex spelling rules, it is nearly impossible to surface the desired word among the suggestions without phonetic information.

4.5 Approximate string matching algorithm

In this thesis we use the Levenshtein edit distance as the approximate string-matching algorithm. It is used to check the closeness of dictionary words to the misspelled word, and it yields suggestions that are close to the misspelled word.

Levenshtein edit distance: the edit distance between two strings s1 and s2 is defined as the minimum number of point mutations required to change s1 into s2, where a point mutation is one of: insert a letter, delete a letter, replace a letter, or transpose two letters. The Levenshtein edit distance is used for various purposes such as spell checking, speech recognition, DNA analysis and plagiarism detection. For example, e("kitten", "sitting") = 3:

kitten -> sitten (substitute "k" with "s"),
sitten -> sittin (substitute "e" with "i"),
sittin -> sitting (insert "g" at the end).

For example, assume our lexicon consists of a given list of words and our misspelled word is কল. When we check the lexicon dictionary we find no such word কল, so it is a misspelled word according to this dictionary. To generate and rank the suggestions, we compute the edit distance to all the words of the dictionary.

4.6 How to Rank Suggestions

To rank the suggestions, we use both the phonetic edit distance, i.e., the edit distance between the phonetic codes, and the normal edit distance. We do not use the plain average of the two but a weighted average: score = a * phonetic_edit_distance + (1 - a) * normal_edit_distance, where a > (1 - a). We rank the suggestions according to the score achieved by each word.

Table 4: Example of edit distance

Hence, our ranked suggestions for ওর will be ওর , ও ও, ওথ , ভ র.
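The weighted score of section 4.6 can be sketched as follows. The weight a = 0.7 and the encode delegate are illustrative assumptions, and the edit distance shown is the plain Levenshtein recurrence without the transposition operation.

    using System;

    // Weighted ranking score from section 4.6 (illustrative weight a = 0.7).
    static class SuggestionRanker
    {
        // Plain Levenshtein distance (insert/delete/replace).
        public static int EditDistance(string s1, string s2)
        {
            var d = new int[s1.Length + 1, s2.Length + 1];
            for (int i = 0; i <= s1.Length; i++) d[i, 0] = i;
            for (int j = 0; j <= s2.Length; j++) d[0, j] = j;
            for (int i = 1; i <= s1.Length; i++)
                for (int j = 1; j <= s2.Length; j++)
                    d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                                       d[i - 1, j - 1] + (s1[i - 1] == s2[j - 1] ? 0 : 1));
            return d[s1.Length, s2.Length];
        }

        // score = a * phonetic_edit_distance + (1 - a) * normal_edit_distance, a > 1 - a.
        // Lower scores rank higher.
        public static double Score(string misspelled, string candidate,
                                   Func<string, string> encode, double a = 0.7)
            => a * EditDistance(encode(misspelled), encode(candidate))
             + (1 - a) * EditDistance(misspelled, candidate);
    }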
4.7 Performance of our proposed encoding

Our lexicon dictionary contains 110,750 words for suggestions. The performance of the proposed encoding was measured on 1607 commonly misspelled words. First, we apply our encoding to both the correct and the misspelled word; then we use the Edit Distance algorithm between the two encodings as the minimum-distance measure and keep the few words with the lowest distance (e.g., 1). The words found by the Edit Distance algorithm at this stage are then passed to the Soundex algorithm, which reduces the number of candidate words: the words with low edit distance are narrowed down using Soundex. The words found using the Soundex algorithm are in turn passed to the Metaphone algorithm. After the Metaphone step we obtain the targeted word; a result is considered correct if its edit distance to the correct word is 0. In our case, 130 out of 1607 words do not produce an edit distance of 0 with the correct word; these are counted as errors, giving an accuracy of 91.91%.

Table 5: Performance of encoding

No. of words               1607
Correct (edit distance 0)  1477
Error                      130
Rate of accuracy           91.91%
Rate of error              8.08%

The number of unmatched words falls to 107 and 23 if we consider edit distances of 1 and 2 respectively, as shown in Table 6.

Table 6: Distribution of error

Error              130
Edit distance = 1  107
Edit distance = 2  23

After completion of our proposed technique we get a suggestion list of words, showing suggestions that have edit distance >= 2. So we can always find our expected word in the suggestion list, and more than 91.91% of the time the word is at the top of the suggestion list.

4.8 Example of transliteration

Input: ami bhal achi. Tomar khbor ki? Ajke shndha bela tumi ki Kroch. obak bepar hl, ami ekhon bangla likhte pari English diye. ar mjar bepar hl ami dui vhabe likhte pari. ek`ta daireckT arekta phnetik. Tmar desh e koto taka te Dlar. Ami abar jukt brn likhte pari.

Output in direct mapping: আিম ভ লল আিি। েত ম র খবর িক। আজলক সন্ধ্য েবল ত িম িক করি। অব ক বযপ র হল , আিম এখন ব ল িলখলত প ির। অভ্র িিয় । আর মজ র েবপ র হল আিম ি ই ভ লব িলখলত প ির। একট ড ইলরক্ট আলরকট ফলনটিক। েত ম র েিশ এ কত ট ক েত ডল র। আিম এই ভ লব আব র জ ক্ত বনন িলখলত প ির।

Table 7: Proposed name searching for Bangla using direct mapping

private void ShowOutput(List<string> matches, string code, bool isShowMessageBox,
                        System.Windows.Forms.RichTextBox rtb)
{
    StringBuilder builder = new StringBuilder();
    builder.Append("Searching for:\r\n");
    builder.AppendFormat("{0} ({1})\r\n\r\n", txtFind.Text, code);
    if (matches.Count > 0)
    {
        builder.AppendFormat("Matches found ({0}):\r\n", matches.Count);
        foreach (string match in matches)
            builder.AppendFormat("{0}\r\n", match);
    }
    else
    {
        builder.Append("No matches found");
    }
    if (isShowMessageBox)
        MessageBox.Show(builder.ToString());
    else
        rtb.Text = builder.ToString();
}

The rules described in Table 7 derive names from the dictionary using direct mapping: if the input word exists in the dictionary, the system reports a match; if the input word does not exist in the dictionary, it reports that the word was not found.

4.9 Code for Name searching using Dictionary

public partial class FormMeasurement : Form
{
    private static SpellCheck _Dictionary;

    public FormMeasurement()
    {
        InitializeComponent();
    }

    #region Events

    private void buttonBrowse_Click(object sender, EventArgs e)
    {
        textBoxDictionaryPath.ReadOnly = false;
        openFileDialog.InitialDirectory = System.IO.Path.GetFullPath(@"..\..\Dictionary");
        openFileDialog.Title = "Browse Text Files";
        openFileDialog.CheckFileExists = true;
        openFileDialog.CheckPathExists = true;
        openFileDialog.DefaultExt = "txt";
        openFileDialog.Filter = "Text files (*.txt)|*.txt|All files (*.*)|*.*";
        openFileDialog.FilterIndex = 2;
        openFileDialog.RestoreDirectory = true;
        openFileDialog.ReadOnlyChecked = true;
        openFileDialog.ShowReadOnly = true;
        if (openFileDialog.ShowDialog() == DialogResult.OK)
        {
            textBoxDictionaryPath.Text = openFileDialog.FileName;
            textBoxDictionaryPath.ReadOnly = true;
        }
    }

    private void btnSearch_Click(object sender, EventArgs e)
    {
        listViewSuggestionList.Items.Clear();
        _Dictionary = new SpellCheck(File.ReadAllText(textBoxDictionaryPath.Text),
                                     textBoxDictionaryPath.Text.Contains("BD"));
        string source = txtFind.Text;

        #region Edit Distance
        List<string> suggestions = _Dictionary.Correct(source);
        ListViewItem item;
        foreach (string targetString in suggestions)
        {
            int distance = EditDistance.Compare(source.ToLower(), targetString.ToLower());
            item = new ListViewItem(targetString);
            item.SubItems.Add(distance.ToString());
            listViewSuggestionList.Items.Add(item);
        }
        #endregion

        #region Soundex
        string[] names = suggestions.ToArray();
        List<string> matches = new List<string>();   // list to hold matches
        string code = SearchSoundex(txtFind.Text, names, matches);
        ShowOutput(matches, code, false, richTextBoxSoundex);
        #endregion

        #region Metaphone
        matches = new List<string>();                // list to hold matches
        code = SearchMetaphone(txtFind.Text, names, matches);
        ShowOutput(matches, code, false, richTextBoxMetaphone);
        #endregion

        ShowOutput(matches, code, true, null);
    }

    #endregion

    #region Soundex

    private string SearchSoundex(string find, string[] names, List<string> matches)
    {
        find = ConvertSoundex(find);
        string code = Soundex.Encode(find);          // encode the string we want to find
        foreach (string name in names)               // search through the list of names
        {
            string soundex_name = ConvertSoundex(name);
            if (Soundex.Encode(soundex_name) == code) // compare Soundex-encoded versions
                matches.Add(name);                    // found a match: add it to the list
        }
        return code;
    }

    private string ConvertSoundex(string text)
    {
        StringBuilder engText = new StringBuilder();
        for (int index = 0; index < text.Length; index++)
        {
            switch (text[index])
            {
                // ... the same letter-to-Latin cases as in Table 3 (section 4.1) ...
            }
        }
        return engText.ToString();
    }

    #endregion
    #region Metaphone

    private string SearchMetaphone(string find, string[] names, List<string> matches)
    {
        find = ConvertMetaphone(find);
        Metaphone metaphone = new Metaphone();
        string code = metaphone.Encode(find);         // encode the string we want to find
        foreach (string name in names)                // search through the list of names
        {
            string metaphone_name = ConvertMetaphone(name);
            if (metaphone.Encode(metaphone_name) == code) // compare Metaphone-encoded versions
                matches.Add(name);                    // found a match: add it to the list
        }
        return code;
    }

    private string ConvertMetaphone(string text)
    {
        StringBuilder engText = new StringBuilder();
        for (int index = 0; index < text.Length; index++)
        {
            switch (text[index])
            {
                case (char)2437: engText.Append("o"); break;  // অ
                case (char)2451: engText.Append("o"); break;  // ও
                case (char)2438: engText.Append("a"); break;  // আ
                case (char)2494: engText.Append("a"); break;  // া a-kar
                case (char)2439: engText.Append("i"); break;  // ই
                case (char)2440: engText.Append("i"); break;  // ঈ
                case (char)2495: engText.Append("i"); break;  // ি rossi-kar
                case (char)2496: engText.Append("i"); break;  // ী dirghi-kar
                case (char)2441: engText.Append("u"); break;  // উ
                case (char)2442: engText.Append("u"); break;  // ঊ
                case (char)2497: engText.Append("u"); break;  // ু rossu-kar
                case (char)2498: engText.Append("u"); break;  // ূ dirghu-kar
                case (char)2447: engText.Append("e"); break;  // এ
                case (char)2503: engText.Append("e"); break;  // ে e-kar
                case (char)2448: engText.Append("oi"); break; // ঐ
                case (char)2504: engText.Append("oi"); break; // ৈ oi-kar
                case (char)2452: engText.Append("ou"); break; // ঔ
                case (char)2508: engText.Append("ou"); break; // ৌ ou-kar
                case (char)2453: engText.Append("k"); break;  // ক
                case (char)2454: engText.Append("k"); break;  // খ
                // case ' ': engText.Append("k"); break;
                case (char)2455: engText.Append("g"); break;  // গ
                case (char)2456: engText.Append("g"); break;  // ঘ
                case (char)2457: engText.Append("ng"); break; // ঙ
                case (char)2434: engText.Append("ng"); break; // ং anusvara
                case (char)2458: engText.Append("c"); break;  // চ
            case (char)2459: engText.Append("c"); break;   // 'ছ'
            case (char)2460: engText.Append("j"); break;   // 'জ'
            case (char)2461: engText.Append("j"); break;   // 'ঝ'
            case (char)2479: engText.Append("j"); break;   // 'য'
            case (char)4444: engText.Append("e"); break;   // ya-phala
            case (char)2462: engText.Append("n"); break;   // 'ঞ'
            case (char)2463: engText.Append("T"); break;   // 'ট'
            case (char)2464: engText.Append("T"); break;   // 'ঠ'
            case (char)2465: engText.Append("D"); break;   // 'ড'
            case (char)2466: engText.Append("D"); break;   // 'ঢ'
            case (char)2443: engText.Append("ri"); break;  // 'ঋ'
            case (char)2480: engText.Append("r"); break;   // 'র'
            case (char)2524: engText.Append("r"); break;   // 'ড়'
            case (char)2525: engText.Append("r"); break;   // 'ঢ়'
            case (char)2472: engText.Append("n"); break;   // 'ন'
            case (char)2467: engText.Append("n"); break;   // 'ণ'
            case (char)2468: engText.Append("t"); break;   // 'ত'
            case (char)2469: engText.Append("t"); break;   // 'থ'
            case (char)2470: engText.Append("d"); break;   // 'দ'
            case (char)2471: engText.Append("d"); break;   // 'ধ'
            case (char)2474: engText.Append("p"); break;   // 'প'
            case (char)2475: engText.Append("p"); break;   // 'ফ'
            case (char)2476: engText.Append("b"); break;   // 'ব'
            case (char)2477: engText.Append("b"); break;   // 'ভ'
            case (char)2478: engText.Append("m"); break;   // 'ম'
            case (char)2527: engText.Append("y"); break;   // 'য়'
            case (char)2482: engText.Append("l"); break;   // 'ল'
            case (char)2486: engText.Append("s"); break;   // 'শ'
            case (char)2487: engText.Append("s"); break;   // 'ষ'
            case (char)2488: engText.Append("s"); break;   // 'স'
            case (char)2489: engText.Append("h"); break;   // 'হ'
            case (char)58:   engText.Append("h"); break;   // visarga (ঃ; source uses ASCII ':')
            case (char)2433: engText.Append("o"); break;   // chandrabindu (ঁ)
            //case (char)2507: engText.Append("O"); break; // o-kar (ো)</s>
<s>            //case (char)2509: /*engText.Append("OU");*/ break; // hasanta (্)
        }
    }
    return engText.ToString();
}
#endregion

#region Private Method
private void ShowOutput(List<string> matches, string code, bool isShowMessageBox, System.Windows.Forms.RichTextBox rtb)
{
    StringBuilder builder = new StringBuilder();
    builder.Append("Searching for:\r\n");
    builder.AppendFormat("{0} ({1})\r\n\r\n", txtFind.Text, code);
    if (matches.Count > 0)
    {
        builder.AppendFormat("Matches found ({0}):\r\n", matches.Count);
        foreach (string match in matches)
            builder.AppendFormat("{0}\r\n", match);
    }
    else
        builder.Append("No matches found");
    // Either pop up the result or write it into the given rich text box
    if (isShowMessageBox)
        MessageBox.Show(builder.ToString());
    else
        rtb.Text = builder.ToString();
}
#endregion

4.10 Proposed Technique:

Fig 2: Proposed technique

As Figure 2 shows, the phonetic encoding of the input word is first looked up in the dictionary. If the word is found there, it is reported as correct. Otherwise, the Edit Distance algorithm of our proposed system retrieves the nearest dictionary words, keeping only the candidates whose distance value is 1. The Soundex algorithm is then applied to these candidates, reducing the set to the words that also sound like the input. Finally, the Metaphone algorithm is applied to the remaining candidates to obtain the target word. If the input cannot be matched at any stage, an error message reports it as a wrong word; otherwise, the words closest to the desired word are displayed as suggestions, from which the user can pick the intended word.

Sample Result #1 (Fig 3: sample output for বর্ণ): the user enters the word বর্ণ. The system first looks it up in the dictionary, where it finds বর্ণ itself together with the similar word বর্নন, and shows both as suggestions for correction.

Sample Result #2 (Fig 4: sample output for ডাইরেক্ট): the input word is found directly in the dictionary, so the system reports that the given word is correct.

Sample Result #3 (Fig 5: sample output for তোমারর): the user types তোমারর, a misspelling containing an extra character. The input first goes to the dictionary for matching; when no match is found, it passes through the Edit Distance, Soundex and Metaphone stages, which return the intended word তোমার.</s>
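Read end to end, the cascade in Figure 2 can be summarized in code. The following is a minimal sketch rather than the thesis implementation: it assumes the Soundex and Metaphone encoders shown in the listings in this chapter are passed in as delegates, and it uses a standard Levenshtein edit distance.

using System;
using System.Collections.Generic;
using System.Linq;

static class SpellCascadeSketch
{
    // Standard Levenshtein edit distance between two strings.
    static int EditDistance(string a, string b)
    {
        int[,] d = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;
        for (int i = 1; i <= a.Length; i++)
            for (int j = 1; j <= b.Length; j++)
                d[i, j] = Math.Min(
                    Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                    d[i - 1, j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1));
        return d[a.Length, b.Length];
    }

    public static List<string> Suggest(string input, IEnumerable<string> dictionary,
        Func<string, string> soundex, Func<string, string> metaphone)
    {
        var words = dictionary.ToList();
        if (words.Contains(input))
            return new List<string> { input };           // word is already correct

        // Stage 1: keep only candidates at edit distance 1.
        var candidates = words.Where(w => EditDistance(input, w) == 1).ToList();

        // Stage 2: narrow the set by matching Soundex codes.
        var bySound = candidates.Where(w => soundex(w) == soundex(input)).ToList();
        if (bySound.Count > 0) candidates = bySound;

        // Stage 3: narrow further by matching Metaphone codes.
        var byMeta = candidates.Where(w => metaphone(w) == metaphone(input)).ToList();
        if (byMeta.Count > 0) candidates = byMeta;

        return candidates;                                // suggestions; empty = wrong word
    }
}

In this sketch each phonetic stage only narrows the candidate set when at least one candidate survives it, so an input that defeats both phonetic filters still retains its edit-distance suggestions; whether the thesis system behaves the same way in that corner case is not stated.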
<s>4.11 Proposed technique code:

public partial class FormSoundexMetaphone : Form
{
    public FormSoundexMetaphone()
    {
        InitializeComponent();
        textBoxDictionaryPath.ReadOnly = true;
        btnSearch.Enabled = false;
    }

    #region Form Events
    private void Form1_Load(object sender, EventArgs e)
    {
        cboAlgorithm.SelectedIndex = 0;
    }

    private void btnSearch_Click(object sender, EventArgs e)
    {
        // Get the list of names to search
        string[] names = txtNames.Text.Split(new char[] { '\r', '\n' },
            StringSplitOptions.RemoveEmptyEntries);
        // List to hold matches
        List<string> matches = new List<string>();
        // Call the search method for the selected algorithm
        string code;
        if (cboAlgorithm.Text == "Soundex")
            code = SearchSoundex(txtFind.Text, names, matches);
        else // Metaphone
            code = SearchMetaphone(txtFind.Text, names, matches);

        #region Show result
        StringBuilder builder = new StringBuilder();
        builder.Append("Searching for:\r\n");
        builder.AppendFormat("{0} ({1})\r\n\r\n", txtFind.Text, code);
        if (matches.Count > 0)
        {
            builder.AppendFormat("Matches found ({0}):\r\n", matches.Count);
            foreach (string match in matches)
                builder.AppendFormat("{0}\r\n", match);
        }
        else
            builder.Append("No matches found");
        MessageBox.Show(builder.ToString());
        #endregion
    }

    private void btnClose_Click(object sender, EventArgs e)
    {
        Close();
    }

    private void buttonBrowse_Click(object sender, EventArgs e)
    {
        textBoxDictionaryPath.ReadOnly = false;
        openFileDialog.InitialDirectory = System.IO.Path.GetFullPath(@"..\..\Dictionary");
        openFileDialog.Title = "Browse Text Files";
        openFileDialog.CheckFileExists = true;
        openFileDialog.CheckPathExists = true;
        openFileDialog.DefaultExt = "txt";
        openFileDialog.Filter = "Text files (*.txt)|*.txt|All files (*.*)|*.*";
        openFileDialog.FilterIndex = 2;
        openFileDialog.RestoreDirectory = true;
        openFileDialog.ReadOnlyChecked = true;
        openFileDialog.ShowReadOnly = true;
        if (openFileDialog.ShowDialog() == DialogResult.OK)
        {
            textBoxDictionaryPath.Text = openFileDialog.FileName;
            textBoxDictionaryPath.ReadOnly = true;
            string dictionary = File.ReadAllText(textBoxDictionaryPath.Text);
            List<string> wordList = dictionary.Split('\n', ' ').ToList();
            // Keep only the word part of entries of the form word/FLAGS
            string[] s = wordList.Select(w => w.Any(x => !char.IsLetter(x))
                ? w.Substring(0, w.IndexOf("/") == -1 ? w.Length : w.IndexOf("/"))
                : w).ToArray();
            txtNames.Lines = s;
            btnSearch.Enabled = true;
        }
    }
    #endregion

    #region Soundex
    private string SearchSoundex(string find, string[] names, List<string> matches)
    {
        find = ConvertSoundex(find);        // Romanize the Bangla input first
        string code = Soundex.Encode(find); // Encode the string we want to find
        // Search through the list of names
        foreach (string name in names)
        {
            string soundex_name = ConvertSoundex(name);
            // Compare against the Soundex-encoded version of the name
            if (Soundex.Encode(soundex_name) == code)
            {
                // Found a match--add it to the list
                matches.Add(soundex_name);
            }
        }
        return code;
    }

    private string ConvertSoundex(string text)
    {
        StringBuilder engText = new StringBuilder();
        for (int index = 0; index < text.Length; index++)
        {
            switch (text[index])
            {
                // character-mapping cases as listed earlier
            }
        }
        return engText.ToString();
    }
    #endregion

    #region Metaphone
    private string SearchMetaphone(string find, string[] names, List<string> matches)
    {
        find = ConvertMetaphone(find);          // Encode the string we want to find
        Metaphone metaphone = new Metaphone();
        string code = metaphone.Encode(find);
        // Search through the list of names
        foreach (string name in names)
        {
            string metaphone_name = ConvertMetaphone(name);
            // Compare against the Metaphone-encoded version of the name
            if (metaphone.Encode(metaphone_name) == code)
            {
                // Found a match--add it to the list
                matches.Add(metaphone_name);
            }
        }
        return code;
    }

    private string ConvertMetaphone(string text)
    {
        StringBuilder engText = new StringBuilder();
        for (int index = 0; index < text.Length; index++)
        {
            switch (text[index])
            {
                // character-mapping cases as listed earlier
            }
        }
        return engText.ToString();
    }
    #endregion
}

CHAPTER V: CONCLUSION

We have improved Bangla spelling checking, transliteration and name searching applications using Edit Distance, Soundex and Metaphone phonetic encoding.
The summary of the improvements in our new system is as follows:
• It can be used to develop a spelling checker that offers words with the same pronunciation as suggestions.
• It can be used to develop a transliteration system that is not restricted to one-to-one direct mapping but can also supply words with the same pronunciation from the dictionary.
• It can be used to develop a name searching application in which similar-sounding names are easily found in the dictionary and ranked in the suggestions.

Future research
We will try to upgrade the system so that it can convert input voice into text and, from that text, find the related words in the dictionary. If a word matches exactly, it will be displayed directly. Otherwise,</s>
<s>it will show suggestions for words of the same type. In this way, the system will be more effective in the future.</s>
<s>A Hybrid Approach for Transliterated Word-Level Language Identification: CRF with Post Processing Heuristics

Somnath Banerjee, CSE Department, JU, India, s.banerjee1980@gmail.com
Aniruddha Roy, CSE Department, JU, India, aniruddha@gmail.com
Alapan Kuila, CSE Department, JU, India, alapan.cse@gmail.com
Sudip Kumar Naskar, CSE Department, JU, India, sudip.naskar@cse.jdvu.ac.in
Sivaji Bandyopadhyay, CSE Department, JU, India, sivaji_cse@yahoo.com
Paolo Rosso, NLE Lab, UPV, Spain, prosso@dsic.upv.es

ABSTRACT
In this paper, we describe a hybrid approach for word-level language (WLL) identification of Bangla words written in Roman script and mixed with English words, as part of our participation in the shared task on transliterated search at the Forum for Information Retrieval Evaluation (FIRE) in 2014. A CRF-based machine learning model and post-processing heuristics are employed for the WLL identification task. In addition to language identification, two transliteration systems were built to transliterate detected Bangla words written in Roman script into the native Bangla script. The system demonstrated an overall token-level language identification accuracy of 0.905. The token-level Bangla and English language identification F-scores are 0.899 and 0.920 respectively. The two transliteration systems achieved accuracies of 0.062 and 0.037. The system presented in this paper obtained the best scores across almost all metrics among the participating systems for the Bangla-English language pair.

Categories and Subject Descriptors
I.2.7 [Artificial Intelligence]: Natural Language Processing, Language parsing and understanding

General Terms
Experimentation, Languages

Keywords
Word level language identification, Transliteration

1. INTRODUCTION
In spite of having indigenous scripts, Indian languages (e.g., Bangla, Hindi, Tamil, etc.) are often written in Roman script in user generated content (such as blogs and tweets) for various socio-cultural and technological reasons. This process of phonetically representing the words of a language in a non-native script is called (forward) transliteration. The use of Roman script in transliteration for these languages presents serious challenges to understanding, search and (backward) transliteration. These challenges include handling spelling variations, diphthongs, doubled letters, recurring constructions, etc.
Language identification for documents is a well-studied natural language problem [3]. King and Abney [9] presented the different aspects of this problem and focused on the problem of labeling the language of individual words within a multilingual document. They proposed language identification at the word level in mixed-language documents instead of sentence-level identification.
The last decade has seen the development of transliteration systems for Asian languages. Some notable transliteration systems were built for Chinese [14], Japanese [7], Korean [8], Arabic [1], etc. Transliteration systems were also developed for Indian languages [6, 16].

2.
TASK DEFINITION
A query q: <w1 w2 w3 ... wn> is written in Roman script. The words w1, w2, w3, ..., wn could be standard English words or transliterations from Indian languages (IL), e.g., Bangla, Hindi, etc. The objective of the task is to label each word as English or IL depending on whether it is a standard English word or a transliterated IL word. After labeling the words, for each transliterated word the correct transliteration has to be provided in the native script (i.e., the script which is used for writing the IL).</s>
<s>Names of people and places in IL should be considered transliterated entries whenever the name is native. Thus, the system has to transliterate identified native names (e.g., Arundhati Roy). Non-native names (e.g., Ruskin Bond) should be skipped during labeling and are not evaluated.

3. DATASETS AND RESOURCES
This section describes the dataset that has been used in this work. The training and the test data were constructed using manual and automated techniques and made available to the task participants by the organizers. The training dataset consists of 800 lines; the test set contains 1,000 sentences.
The following resources provided by the organizers were also employed:
• English word frequency list¹: contains standard dictionary words along with their frequencies computed from a large corpus constructed from news corpora.
• Bangla word frequency list²: contains Bangla words in Roman script along with their frequencies computed from the Anandabazar Patrika news corpus.
• Bangla word transliteration pairs dataset [15]: contains Bangla-English transliteration pairs collected from different users in multiple setups - chat, dictation and other scenarios.
¹ http://cse.iitkgp.ac.in/resgrp/cnerg/qa/fire13translit/English%20-%20Word%20frequencies.txt
² http://cse.iitkgp.ac.in/resgrp/cnerg/qa/fire13translit/Bangla-Word%20frequencies.txt

4. SYSTEM DESCRIPTION
We divided the overall task into two sub-problems: (a) word-level language (WLL) classification, and (b) transliteration of identified IL words into the native script.

4.1 WLL classification features
4.1.1 Character n-grams
A few studies [9, 5] successfully used the character n-gram feature and obtained reasonable results. Following them, we also used this feature, from character unigrams up to five-grams. After an empirical study on the development set, we fixed the maximum word length at 10 characters for generating the character n-grams. If the length of a word exceeds 10, then, owing to the fixed-length vector constraint, the system generates 10 unigrams and the last two characters are skipped. The system thus always generates a total of 40 n-grams, i.e., 10 unigrams, 9 bigrams, 8 trigrams, 7 four-grams and 6 five-grams. The entire word is also considered as a feature.
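As a sketch of this feature template, the function below generates the fixed-length n-gram inventory just described. The padding symbol used for positions beyond a short word's end is our assumption; the paper does not say how shorter words are filled.

using System;
using System.Collections.Generic;

static class NgramFeatureSketch
{
    public static List<string> Extract(string word, int maxLen = 10, string pad = "_")
    {
        if (word.Length > maxLen)
            word = word.Substring(0, maxLen);   // characters beyond position 10 are skipped

        var features = new List<string>();
        for (int n = 1; n <= 5; n++)            // unigrams up to five-grams
            for (int start = 0; start <= maxLen - n; start++)
                features.Add(start + n <= word.Length
                    ? word.Substring(start, n)
                    : pad);                     // fixed-length vector constraint
        features.Add(word);                     // the entire word as a feature
        return features;                        // 10+9+8+7+6 = 40 n-grams, plus the word
    }

    static void Main()
    {
        Console.WriteLine(string.Join(" ", Extract("bhalo")));
    }
}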
4.1.2 Symbol character
A word might start with some symbol, e.g., #, @, etc. It has also been observed from the training corpus that symbols appear within words themselves, e.g., a***a, kankra-r, etc. Sometimes the entire word is made up of symbols, e.g., “ or ?.
has_symbol(word) = 1 if the word contains any symbol; 0 otherwise.

4.1.3 Links
This feature is used as a binary feature. If a word is a link, it is set to 1; otherwise it is set to 0.
is_link(word) = 1 if the word is a link; 0 otherwise.

4.1.4 Presence of digit
The use of digit(s) in a word sometimes carries a different meaning in chat dialogue. For example, ‘gr8’ means ‘great’, and ‘2’ could mean ‘to’ or ‘too’. This feature is also used as a binary feature.
has_digit(word) = 1 if the word contains any digit; 0 otherwise.

4.1.5 Word suffix
Any language-dependent feature increases the accuracy of the system for a particular language. [2] successfully used the fixed-length suffix feature in a Bangla named entity recognition task. To include this feature, we prepared a small suffix list (10 entries) under human supervision from the archive (10 documents) of an online Bangla newspaper. This feature is also used as a binary feature.
has_suffix(word) = 1 if the word ends with any suffix from the list; 0 otherwise.
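The four binary features above can be sketched as below. The symbol test and the suffix entries are illustrative stand-ins for the paper's hand-built resources, not the actual lists.

using System;
using System.Linq;

static class BinaryFeatureSketch
{
    // Placeholder entries; the paper's 10-entry suffix list is not published here.
    static readonly string[] BanglaSuffixes = { "ta", "gulo", "khana" };

    public static int HasSymbol(string w) =>
        w.Any(ch => !char.IsLetterOrDigit(ch)) ? 1 : 0;

    public static int IsLink(string w) =>
        w.StartsWith("www.") || w.StartsWith("http:") || w.StartsWith("https:") ? 1 : 0;

    public static int HasDigit(string w) =>
        w.Any(char.IsDigit) ? 1 : 0;

    public static int HasSuffix(string w) =>
        BanglaSuffixes.Any(s => w.EndsWith(s)) ? 1 : 0;

    static void Main()
    {
        Console.WriteLine("{0} {1} {2} {3}",
            HasSymbol("kankra-r"), IsLink("http://x"), HasDigit("gr8"), HasSuffix("boita"));
    }
}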
<s>4.1.6 Contextual probability
This feature is crucial for resolving ambiguity in the WLL identification problem. Let us consider the examples given below.
• Mama take this badge off of me.
• Ami take boli je ami bansdronir kichu agei thaki.
The word ‘take’ exists in the English vocabulary; however, the backward transliteration of ‘take’ is also a valid Bangla word. Words like ‘take’, ‘are’, ‘pore’ and ‘bad’ are truly ambiguous with respect to the WLL identification problem, as they are valid English words as well as backward transliterations of valid Bangla words. The context of a word can be used to correctly identify the language of such an ambiguous word, so we included this very useful feature.
Since in the Bangla-English language identification task the label must be one of the tags {English, Hindi, Bangla, Others}, we calculate the probability of the previous word being English, Hindi, Bangla and Others; thus four probabilities are calculated for the previous word. In the same way, the labeling probabilities for the next word are calculated.
The system calculates the respective probabilities as
P_tag(W) = F_tag(W) / F(W),
where tag is any one of {E, O, H, B}, F_tag(W) is the frequency of the word W belonging to tag, and F(W) is the frequency of the word W. These frequencies are counted from the training corpus. However, for a few words in the test set the respective probabilities are 0. Since we do not want to assign zero probability to those words, we assign some probability mass to them using smoothing. We use the simplest smoothing technique, Laplace smoothing, which adapts the empirical counts by adding a fixed number (say, 1) to every count and thus eliminates counts of zero. For simplicity, we use add-one smoothing. The adjusted formula is therefore
P_tag(W) = (F_tag(W) + 1) / (F(W) + N),
where N is the total number of words in the training corpus.
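The smoothed estimate can be sketched as below. This is a minimal sketch of the formula only; the word 'take', its tag counts, and the corpus size are invented placeholders, not numbers from the paper.

using System;
using System.Collections.Generic;

static class ContextProbabilitySketch
{
    // F_tag(W): how often word W carries each tag in the training corpus (placeholders).
    static readonly Dictionary<string, Dictionary<char, int>> TagCounts =
        new Dictionary<string, Dictionary<char, int>>
        {
            { "take", new Dictionary<char, int> { { 'E', 40 }, { 'B', 25 } } },
        };

    // P_tag(W) = (F_tag(W) + 1) / (F(W) + N), with add-one smoothing.
    public static double P(string word, char tag, int totalWords /* N */)
    {
        Dictionary<char, int> counts;
        int fTag = 0, fWord = 0;
        if (TagCounts.TryGetValue(word, out counts))
        {
            foreach (var kv in counts) fWord += kv.Value;   // F(W)
            counts.TryGetValue(tag, out fTag);              // F_tag(W)
        }
        return (fTag + 1.0) / (fWord + totalWords);
    }

    static void Main()
    {
        // Four probabilities per context word: E, H, B, O.
        foreach (char tag in "EHBO")
            Console.WriteLine("P_{0}(take) = {1:F6}", tag, P("take", tag, 100000));
    }
}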
<s>4.2 WLL Classifier
In this work, a Conditional Random Field (CRF) is used to build the model for the WLL identification classifier. We used the CRF++ toolkit³, a simple, customizable, open-source implementation of CRF.
³ http://crfpp.googlecode.com/svn/trunk/doc/index.html

4.3 Post Processing
After the CRF classifier labels each word, post-processing heuristics are applied to make a rule-based decision over the outcome of the classifier. The following heuristics are employed (a code sketch of the suffix-based rules follows the list). In the rules, C-Tag(w) is the classifier's output, H-Tag(w) is the heuristics-based output, has_suffix(w, s) means that word w ends with suffix s, S is the set of words containing special characters, and E, B, H and O are the English, Bangla, Hindi and Others tags.

Rule-1: Many English words end with ‘ed’ (e.g., decided, reached, arrested, looked), but we found no occurrence of a Bangla word ending with that suffix in the given corpus. Therefore, a word ending with ‘ed’ and having no symbol inside it is tagged as an English word; we found 306 such occurrences in the test corpus.
R1: H-Tag(w) = E, if C-Tag(w) = B or O, has_suffix(w, ‘ed’) = true and w ∉ S.

Rule-2: An English word may also end with the suffix ‘ly’, e.g., thoughtfully, anxiously, unfriendly. It was observed in the test dataset that a few English words were not spelled correctly and were mis-classified as Bangla words, e.g., lvly, xactly, physicaly. These words are corrected by this rule.
R2: H-Tag(w) = E, if C-Tag(w) = B or O, has_suffix(w, ‘ly’) = true and w ∉ S.

Rule-3: It was also observed that, unlike English words (e.g., evening, kissing, playing), no Bangla word in the training corpus ends with the suffix ‘ing’. We found 316 such occurrences in the test set, but some of them are not tagged as English because those words start with ‘#’ (e.g., #engineering). This rule was able to correct some spelling errors such as luking, nthing, njoying.
R3: H-Tag(w) = E, if C-Tag(w) = B or O, has_suffix(w, ‘ing’) = true and w ∉ S.

Rule-4: The use of apostrophe-s (i.e., ’s) is very common in English words, e.g., women’s, uncle’s. In the test dataset we found 73 uses of it.
R4: H-Tag(w) = E, if C-Tag(w) = B or O, has_suffix(w, ‘’s’) = true and w ∉ S.

Rule-5: Another very common use of the apostrophe is apostrophe-t (i.e., ’t), e.g., don’t, isn’t, wouldn’t; it even appears in forms such as rn’t and cudn’t.
R5: H-Tag(w) = E, if C-Tag(w) = B or O, has_suffix(w, ‘’t’) = true and w ∉ S.

Rule-6: A few users prefer words ending with ’ll, e.g., I’ll, It’ll, he’ll, you’ll. We found 20 such occurrences in the test set.
R6: H-Tag(w) = E, if C-Tag(w) = B or O, has_suffix(w, ‘’ll’) = true and w ∉ S.

Rule-7: Words like o’clock and O’Keefe are very uncommon among Bangla social media users, but we found 16 such occurrences in the test dataset.
R7: H-Tag(w) = E, if C-Tag(w) = B or O, starts_with(w, ‘o’’) = true and w ∉ S.

Rule-8: This rule is straightforward: if a word contains a special symbol, the word is tagged as O.
R8: H-Tag(w) = O, if C-Tag(w) = B or O or E or H and w ∈ S.

Rule-9: Although a few ambiguities were discussed in 4.1.6, there is a high chance of a word being English if it is in the English dictionary. To account for the ambiguity, we also consider the probability of the word being Bangla.
R9: H-Tag(w) = E, if C-Tag(w) = B and probability_Bangla(w) < 0.08 (this threshold was set empirically).

Rule-10: Character repetition within a word is observed not only in English and Hindi but in Bangla as well. The following has been noticed:
(1) Repetition of a character more than twice at the end of a word makes the word more likely to be English/Hindi than Bangla, e.g., torengeee, plzzzzzz.
(2) Repetition of a character more than twice in the middle of a word makes the word more likely to be Bangla than English, e.g., kisssob, oneeek.
(3) If a word satisfies both (1) and (2), the word is more likely to be English, e.g., muuuuaaahhhhhhhh.
The following rules are employed:
Case-1: R10a: H-Tag(w) = E, if C-Tag(w) = B or O or H, end_repeat(ch) >= 3 and w ∉ S.
Case-2: R10b: H-Tag(w) = B, if C-Tag(w) = E or O or H, middle_repeat(ch) >= 3 and w ∉ S.
Case-3: R10c: H-Tag(w) = E, if C-Tag(w) = B or H or O, end_repeat(ch) >= 3 and middle_repeat(ch) >= 3 and w ∉ S.

Rule-11: This rule is also straightforward: if a word contains any substring from the list {www., http:, https:}, the word is tagged as Others.
R11: H-Tag(w) = O, if C-Tag(w) = B or E or H and contains(w, www.|http:|https:).
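A compact sketch of the suffix-driven part of these heuristics is given below. It is a simplification under our own assumptions: the special-character test is approximate, the rule ordering is collapsed, and the R9 and R10 thresholds are omitted.

using System;
using System.Linq;

static class HeuristicSketch
{
    static readonly string[] EnglishSuffixes = { "ed", "ly", "ing", "'s", "'t", "'ll" };

    // Re-tag a word after the CRF: 'E' = English, 'B' = Bangla, 'H' = Hindi, 'O' = Others.
    public static char Retag(string word, char crfTag)
    {
        // Rule-8: any special symbol (apostrophes excepted, since Rules 4-6 rely on them).
        bool hasSymbol = word.Any(ch => !char.IsLetterOrDigit(ch) && ch != '\'');
        if (hasSymbol) return 'O';

        // Rules 1-6: English suffixes on a word the CRF called Bangla/Others.
        if ((crfTag == 'B' || crfTag == 'O') &&
            EnglishSuffixes.Any(s => word.EndsWith(s, StringComparison.OrdinalIgnoreCase)))
            return 'E';

        // Rule-7: o'clock, O'Keefe, etc.
        if ((crfTag == 'B' || crfTag == 'O') &&
            word.StartsWith("o'", StringComparison.OrdinalIgnoreCase))
            return 'E';

        return crfTag;  // no rule fired
    }

    static void Main()
    {
        Console.WriteLine(Retag("luking", 'B'));    // -> E (Rule-3)
        Console.WriteLine(Retag("am!\"", 'E'));     // -> O (Rule-8)
        Console.WriteLine(Retag("bansdroni", 'B')); // -> B (unchanged)
    }
}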
<s>5. TRANSLITERATION SYSTEM
For transliterating the detected Romanized Bangla words, we built our transliteration system based on the state-of-the-art phrase-based statistical machine translation (PB-SMT) model [13] using the Moses toolkit [12]. PB-SMT is a machine translation model; we therefore adapted it to the transliteration task by translating characters rather than words, as in character-level translation. For character alignment, we used the GIZA++ implementation of the IBM word alignment model [4]. To suit the PB-SMT model to the transliteration task, we do not use the phrase reordering model. The target language model is built on the target side of the parallel data with Kneser-Ney smoothing [10] using the SRILM tool [11]. The PB-SMT model was trained on the English-Bangla word transliteration pairs dataset [15] provided by the task organizers. In a bid to simulate syllable-level transliteration, we also built a transliteration model by breaking the English and Bangla words into chunks of consecutive characters and training the transliteration system on this chunked data. The chunk-level transliteration system is expected to perform better than the character-level system, since a chunk carries more context than a character. While decoding, we first apply the chunk-level transliteration system to the detected Bangla words. If the chunk-level system transliterates a word only partially (i.e., the output still contains Roman characters), the untranslated parts are decoded using the character-level transliteration system. For breaking the English and Bengali words into chunks, we take two approaches. In the first approach (Run-1), we simply break words into chunks of 2/3 consecutive characters. In the other approach (Run-2), we break words into transliteration units (TUs) following the heuristic used in [6]. The TU-level transliteration system was trained on named entities.
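As an illustration of the Run-1 segmentation, the sketch below breaks a word into chunks of two or three consecutive characters. The exact grouping policy is our assumption; the paper states only that words are broken into chunks of consecutive 2/3 characters.

using System;
using System.Collections.Generic;

static class ChunkingSketch
{
    public static List<string> Chunk(string word)
    {
        var chunks = new List<string>();
        int i = 0;
        while (i < word.Length)
        {
            // Take 3 characters unless that would strand a lone trailing
            // character; then take 2. (A 1-letter word passes through whole.)
            int remaining = word.Length - i;
            int size = (remaining == 4 || remaining == 2) ? 2 : Math.Min(3, remaining);
            chunks.Add(word.Substring(i, size));
            i += size;
        }
        return chunks;
    }

    static void Main()
    {
        Console.WriteLine(string.Join("|", Chunk("bansdronir")));  // ban|sdr|on|ir
    }
}

Run-2 would instead cut at transliteration-unit boundaries following the heuristic of [6].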
6. RESULTS
Table 1 presents the obtained results. Our system achieved an overall accuracy of 0.905 for the language labeling task, which is the best among the participating teams.

Table 1: Results
Token-level language accuracy:
  Language | Precision | Recall | F-Measure
  Bangla   | 0.866     | 0.935  | 0.899
  English  | 0.944     | 0.899  | 0.920
Token-level transliteration:
  Run   | Precision | Recall | F-Measure
  Run-1 | 0.033     | 0.572  | 0.062
  Run-2 | 0.019     | 0.338  | 0.037
Other performance metrics:
  EQMF All (no translit.) 0.444; without NE 0.548; without MIX 0.444; without NE&MIX 0.548
  EQMF All: Run-1 0.005, Run-2 0.004
  EQMF without NE: Run-1 0.007, Run-2 0.004
  EQMF without MIX: Run-1 0.005, Run-2 0.004
  EQMF without NE&MIX: Run-1 0.007, Run-2 0.004
  ETPM: Run-1 227/364, Run-2 134/364
  Language identification accuracy 0.905

6.1 Error Analysis
It was observed that the CRF-based WLL classifier made wrong predictions because of the small amount of training data. Moreover, some words were predicted correctly by the classifier but the heuristics made the final prediction wrong; e.g., the word Wannna is wrongly re-classified as Bangla by R10b. R10a also mis-classified Hindi words having character repetition at the end, such as torengeee and Arehhh, as well as Bangla words such as jahhh and jetooooo. Rule-8 re-classified some words, such as am!”, back!”, goin’ and ekjon-eri, due to tokenization errors in the provided test dataset. Some words in the test set were of the form word1/word2, such as isharay/nirupay and samanyo/8B, which were simply classified as O (i.e., Others) by Rule-8 in our system. The TU-level transliteration system was trained over named entities; hence it</s>
<s>performed well for NEs, but the overall performance was affected because the majority of the detected Bangla words were non-NE words.

7. CONCLUSIONS
In this paper we presented a brief overview of our hybrid approach to the automatic WLL identification problem. We found that the use of simple post-processing heuristics enhances the overall performance of the WLL system. Two variants of the transliteration system were developed based on the segmentation of the transliteration data, i.e., at chunk level and at syllable level. As future work we would like to explore more features for the machine learning model and better post-processing heuristics for the WLL identification task, and to increase the efficiency of our transliteration system.

8. ACKNOWLEDGMENTS
We acknowledge the support of the Department of Electronics and Information Technology (DeitY), Government of India, through the project “CLIA System Phase II”.

9. REFERENCES
[1] Y. Al-Onaizan and K. Knight. Named entity translation: Extended abstract. In HLT, pages 122-124, Singapore, 2002.
[2] S. Banerjee, S. Naskar, and S. Bandyopadhyay. Bengali named entity recognition using margin infused relaxed algorithm. In Text, Speech and Dialogue, pages 125-132. Springer International Publishing, 2006.
[3] K. R. Beesley. Language identifier: A computer program for automatic natural-language identification of on-line text. In American Translators Association, page 54, 1988.
[4] P. F. Brown, S. A. D. Pietra, V. J. D. Pietra, and R. L. Mercer. The mathematics of statistical machine translation: parameter estimation. In Computational Linguistics, pages 263-311, 1993.
[5] G. Chittaranjan, Y. Vyas, K. Bali, and M. Choudhury. Word-level language identification using CRF: Code-switching shared task report of MSR India system. In EMNLP, page 73, 2014.
[6] A. Ekbal, S. Naskar, and S. Bandyopadhyay. A modified joint source channel model for transliteration. In COLING-ACL, pages 191-198, Australia, 2006.
[7] I. Goto, N. Kato, N. Uratani, and T. Ehara. Transliteration considering context information based on the maximum entropy method. In MT-Summit IX, pages 125-132, New Orleans, USA, 2003.
[8] S. Y. Jung, S. L. Hong, and E. Paek. An English to Korean transliteration model of extended Markov window. In COLING, pages 383-389, 2000.
[9] B. King and S. Abney. Labeling the languages of words in mixed-language documents using weakly supervised methods. In NAACL-HLT, pages 1110-1119, 2013.
[10] R. Kneser and H. Ney. Improved backing-off for m-gram language modeling. In ICASSP, pages 181-184, Detroit, MI, 1995.
[11] A. Stolcke. SRILM - an extensible language modeling toolkit. In Intl. Conf. on Spoken Language Processing, pages 901-904, 2002.
[12] P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. Moses: open source toolkit for statistical machine translation. In ACL, 2007.
[13] P. Koehn, F. J. Och, and D. Marcu. Statistical phrase-based translation. In HLT-NAACL, 2003.
[14] H. Li, Z. Min, and J. Su. A joint source-channel model for machine transliteration. In ACL, 2004.
[15] V. Sowmya, M. Choudhury, K. Bali, T. Dasgupta, and A. Basu. Resource creation for training and testing of transliteration systems for Indian languages. In LREC, 2010.
[16] H. Surana and A. K. Singh. A more discerning and adaptable multilingual transliteration mechanism for Indian languages. In COLING-ACL, pages 64-71, India, 2008.</s>
<s>J Psycholinguist Res
DOI 10.1007/s10936-014-9302-x

Computational Modeling of Morphological Effects in Bangla Visual Word Recognition
Tirthankar Dasgupta · Manjira Sinha · Anupam Basu
© Springer Science+Business Media New York 2014
T. Dasgupta · M. Sinha · A. Basu, Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur 721302, West Bengal, India. e-mail: iamtirthankar@gmail.com

Abstract In this paper we aim to model the organization and processing of Bangla polymorphemic words in the mental lexicon. Our objective is to determine whether the mental lexicon accesses a polymorphemic word as a whole or decomposes the word into its constituent morphemes and then recognizes them accordingly. To address this issue, we adopted two different strategies. First, we conducted a masked priming experiment over native speakers. Analysis of reaction time (RT) and error rates indicates that, in general, morphologically derived words are accessed via a decomposition process. Next, based on the collected RT data, we developed a computational model that can explain the processing phenomena of the access and representation of Bangla derivationally suffixed words. To do so, we first explored the individual roles of different linguistic features of a Bangla morphologically complex word and observed that the processing of Bangla morphologically complex words depends upon several factors, such as the base and surface word frequency, suffix type/token ratio, suffix family size and suffix productivity. Accordingly, we proposed different feature models. Finally, we combined these feature models and arrived at a new model that takes advantage of the individual feature models and successfully explains the processing phenomena of most Bangla morphologically derived words. Our proposed model shows an accuracy of around 80 %, which outperforms the other related frequency models.

Keywords Mental lexicon · Morphological decomposition · Masked priming · Visual word recognition · Frequency effects · Suffix productivity

Introduction
The term mental lexicon refers to the access, representation and processing of words in the human mind and the various associations between them that help fast retrieval and comprehension of the words in a given context (Aitchison 2005; Marslen-Wilson et al. 1994; Taft and Forster 1975). Words are known to be associated with each other at various levels of linguistic structure, namely orthography, phonology, morphology and semantics. However, the precise nature of these relations and their interactions is unknown. Understanding the organization of the mental lexicon is one of the important goals of cognitive science. A clear understanding of the structure and the processing mechanism of the mental lexicon will further our knowledge of how the human brain processes language. Further, these linguistically important and interesting questions are also highly significant for computational linguistics (CL) and natural language processing (NLP) applications.
Their computational significance arises from the issue of their storage in lexical resources like WordNet (Fellbaum 2010), and raises important questions such as how to store morphologically complex words in a lexical resource like WordNet while keeping in mind storage and access efficiency.
One of the key issues that psycholinguists have been investigating for a long time is the representation and processing of morphologically complex words in the mental lexicon. That is to say, whether for a native speaker a polymorphemic word like “unpreventable” will be processed as a whole, or will be decomposed into</s>
<s>its individual morphemes “un-”, “prevent”, and “-able” and finally recognized via the representation of its stem (morphemic model). It has been argued that people certainly have the capability of such decomposition, since they can understand novel words like “unsupportable”. However, there has been a long-standing debate over whether such decompositions are obligatory (i.e., morphemic) or apply only in those situations where whole-word access fails (Taft 2004) (partial decomposition model). An alternative to the morphemic and partial decomposition models is the full listing model, which assumes that decomposition is not involved at all and that the initial processing of words is performed in terms of the whole-word representation in the mental lexicon (Burani and Caramazza 1987; Burani and Laudanna 1992; Caramazza et al. 1988). Such issues are typically addressed by designing appropriate priming experiments (Frost et al. 1997; Aitchison 2005) or other lexical decision tasks.
Priming results in faster recognition of a stimulus (called the target) based on previous exposure to another stimulus (called the prime). Therefore, if the prime and the target words are morphologically related (say, MANLY and MAN), then, going by the decomposition model, as soon as the prime (MANLY) is presented to a subject, it will be decomposed into its constituent stem (MAN) and suffix (-LY), and these will be recognized individually. Thus, recognition of the target word starts well before it is presented to the subject. Naturally, this results in faster recognition of the target compared to the case where the target is preceded by a morphologically unrelated word (say, MOTHER and MAN), for which no such decomposition of the prime is possible. On the other hand, under the full-listing model, recognition of the target must be independent of the prime: the time to recognize the target MAN preceded by the prime MANLY must equal the case when it is preceded by MOTHER. Hence, if priming by a morphologically related word results in faster recognition of the target, it may be assumed that decomposition has played its role.
Priming experiments can be classified according to the mode of presenting the prime and target words: (a) both are visually presented (Bentin and Feldman 1990; Ambati et al. 2009; Frost et al. 1997; Marslen-Wilson et al. 2008), (b) primes are auditorily presented but the targets are visually presented (Marslen-Wilson et al. 1994; Marslen-Wilson and Tyler 1997; Marslen-Wilson and Zhou 1999; Marslen-Wilson et al. 2008), (c) targets are auditorily presented but the primes are visually presented (Marslen-Wilson et al. 1994). These experiments demonstrate that, across languages, recognition of a target word (say happy) is facilitated by prior exposure to a morphologically related prime word (e.g., happiness). Since morphological relatedness often implies orthographic, phonological and semantic similarities between two words, several attempts have been made to factor out other priming effects from morphological priming (Bentin and Feldman 1990; Drews and Zwitserlood 1995).
The masked priming paradigm, in which the prime word is placed between a forward mask and a target word such that it cannot be consciously perceived (Bodner and Masson 1997; Davis and Rastle 2010), also offers some interesting ways of examining morphological effects in word recognition (Forster and Davis 1984). Through such experiments, morphological priming effects have been shown</s>
<s>to exist in the absence of semantic priming for Hebrew (Frost et al. 1997), phonological priming (Crepaldi et al. 2010), and orthographic priming for French (Grainger et al. 1991) and Dutch (Drews and Zwitserlood 1995).
A cross-modal priming experiment on Bangla derivationally suffixed words was conducted by Dasgupta et al. (2010), where strong priming effects were observed for morphologically and phonologically related prime-target pairs, weak priming for morphologically related but phonologically opaque pairs, and no priming for morphologically unrelated pairs. Apart from this, we do not know of any other cognitive experiments on morphological priming in Bangla or other Indian languages.
Several attempts have been made to provide computational models that can predict the processing of a given polymorphemic word. The obligatory decomposition model (Taft 2004) accounts for the fact that the decomposition of a polymorphemic word depends upon the frequency of its constituent stem (or base word): the higher the stem frequency, the easier it is to decompose. On the other hand, the full listing model (Burani and Laudanna 1992) states that access to a polymorphemic word depends upon the frequency of the whole word: the higher the surface frequency of a word, the easier it is to recognize. The dual route access model (Baayen et al. 1997) argues that whether or not a polymorphemic word will be decomposed into its constituent morphemes depends upon the surface frequency of that word; that is, if the frequency of a polymorphemic word crosses a threshold then the word will be accessed as a whole, otherwise it will be accessed via its parts.
Experiments on English inflected words (Taft and Forster 1975) argued that lexical decision responses to polymorphemic words depend upon the base word frequency. In other words, if recognition of a polymorphemic word always takes place through decomposition, then the higher the frequency of the stem (the base frequency), the shorter the time to recognize the word (the reaction time, or RT). Previous experiments have shown such base frequency effects in most cases, but not in all (Baayen et al. 1997; Bertram et al. 2000; Bradley 1980; Burani and Caramazza 1987; Burani et al. 1984; Colé et al. 1989; Schreuder and Baayen 1997; Taft and Forster 1975; Taft 2004).
Later, the dual processing race model (Baayen 2000) was proposed, in which the full-listing and morphemic paths compete with each other and, depending upon the frequencies of the base and the surface word, one of the paths is chosen. The model proposes that if the frequency of a specific morphologically complex form is above a certain threshold, the direct route wins and the word is accessed as a whole; if it is below that threshold, the parsing route wins and the word is accessed via its parts. However, what the dual processing model fails to explain is whether the stem frequency of a derived word is also involved during the recognition process.
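Read as a decision rule, the dual-route race can be sketched as below. This is a minimal sketch after Baayen (2000) only; the threshold value and the surface-frequency table are invented placeholders, not values from any of the cited studies.

using System;
using System.Collections.Generic;

class DualRouteSketch
{
    const double SurfaceFrequencyThreshold = 10.0;   // placeholder, corpus-dependent

    // Hypothetical surface frequencies for two derived Bangla words.
    static readonly Dictionary<string, double> SurfaceFrequency =
        new Dictionary<string, double>
        {
            { "sonAli", 25.0 },     // frequent derived form
            { "kShamaNIYa", 2.0 },  // rare derived form
        };

    static string AccessRoute(string word)
    {
        double f;
        SurfaceFrequency.TryGetValue(word, out f);
        // Above threshold the direct (whole-word) route wins the race;
        // below it the parsing route decomposes the word into its parts.
        return f >= SurfaceFrequencyThreshold
            ? "whole-word access (direct route wins)"
            : "decomposition into stem + suffix (parsing route wins)";
    }

    static void Main()
    {
        Console.WriteLine("sonAli -> " + AccessRoute("sonAli"));
        Console.WriteLine("kShamaNIYa -> " + AccessRoute("kShamaNIYa"));
    }
}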
<s>The obligatory decomposition (morphemic) model proposed by Taft (2004) for inflectionally suffixed English words showed that the stem frequency of a word plays an important role during the decomposition of a derived word. Further, it argued that access to a polymorphemic word always takes place via two phases: (a) decomposition and (b) recombination. During recognition, any polymorphemic word is first decomposed into its constituent morphemes, the morphemes are individually recognized, and in the recombination phase they are combined again to recognize the whole word.
The effect of morphological family size was observed by Schreuder and Baayen (1997), who showed that the response latencies of morphologically complex words in English depend significantly on the morphological family size of the word concerned. Similar observations were made in the works of Baayen et al. (2006), Pylkkänen et al. (2004), Jong et al. (2000), Carlisle and Katz (2006), Bertram et al. (2000), and Schreuder and Baayen (1997). Closer to the present scope, Prado et al. (2005) model the paradigmatic structure of a morphologically complex word; their work describes a distributed connectionist model of visual word recognition that explores how the paradigmatic effect can describe lexical decision on complex words. Milin et al. (2009) also studied the paradigmatic effect for morphologically complex words through an information-theoretic approach, modeling the reaction time of a complex word on the basis of the entropy of that word. Ford et al. (2010) analyzed the roles of stem and suffix family size and observed that both play important roles in the recognition of morphologically complex words.
In spite of the plethora of work on the representation and processing of polymorphemic words in the mental lexicon, a coherent picture is yet to emerge. Further, most of the studies reported so far conducted experiments mainly in English, Hebrew, Italian, French, Dutch, and a few other languages (Frost et al. 1997; Forster and Davis 1984; Grainger et al. 1991; Drews and Zwitserlood 1995; Taft and Forster 1975; Taft 2004). We do not know of any such investigations for Indian languages, which are considered to be morphologically richer than many of their Indo-European cousins. On the other hand, several cross-linguistic experiments indicate that the mental representation and processing of polymorphemic words are language dependent (Taft 2004); the findings from experiments in one language therefore cannot be generalized to all languages, and it is important to conduct similar experimentation in other languages. Bangla, in particular, supports stacking of inflectional suffixes, a rich derivational morphology inherited from Sanskrit and partly borrowed from Persian and English, an abundance of compounding, and mild agglutination.
Accordingly, the objective of this paper is to present computational models that can be used to understand the organization and processing of Bangla derivationally suffixed polymorphemic words in the mental lexicon. Our aim is to determine whether the mental lexicon processes Bangla morphologically complex words in terms of the full-listing, morphemic or partial decomposition model. For this, we first conducted the masked priming experiment over a set of 500 Bangla morphologically complex words and collected reaction time data from 28 subjects. The experimental results show that priming behavior is observed only in those cases where the prime is the derived form of the target and</s>
<s>has a recognizable suffix (like sonAli–sonA (GOLDEN–GOLD) and bayaska–bayasa (AGED–AGE)). Weak priming is observed in cases where the prime is a derived form of the target but does not have a recognizable suffix (like sabhAba (HABIT)–sbAbhAbika (NATURAL)), or where the prime and the target are not morphologically related at all but share a recognizable suffix (like AmadAni (IMPORT)–Ama (MANGO)). These observations initially point to the obligatory decomposition model proposed by Taft and Forster (1975) and Taft (2004), which assumes polymorphemic words to be processed via decomposition. Further analysis of the RTs obtained in the experiments indicates that the processing of Bangla polymorphemic words may be achieved by the dual route decomposition model proposed by Baayen (2000). However, contrary to the idea of considering the base and/or surface frequency as the sole predictor of the processing of Bangla polymorphemic words in the mental lexicon, we have explored the individual roles of different features of a morphologically complex word, such as the relative frequency between the base and surface word, type-token ratios, and the role of suffixes (their family size, type-token ratio, and productivity) in morphological decomposability. Accordingly, we have proposed different models and evaluated them against the results obtained from the priming experiment. Finally, we combine the roles of all these characteristics and develop a more robust computational model that can predict the organization and processing of Bangla morphologically complex words. We have evaluated our proposed model on derivationally suffixed Bangla words and found that it outperforms the existing ones.
The rest of the paper is organized as follows: section “Psycholinguistic Study of Bangla Polymorphemic Words through Masked Priming Experiments” describes the masked priming experiment performed over a set of Bangla morphologically complex words; section “Applying Frequency Models to Bangla Polymorphemic Words” describes different frequency-based models and their performance in predicting the processing mechanisms of Bangla polymorphemic words; section “Exploring the Role of Suffixes in Processing of Bangla Words” describes the newer models of word recognition; the final sections combine the models and conclude the paper by summarizing the observations and discussing the findings.

Psycholinguistic Study of Bangla Polymorphemic Words through Masked Priming Experiments
In order to study the effect of priming on morphologically derived words in Bangla, we executed the masked priming experiment discussed in Forster and Davis (1984), Rastle et al. (2000) and Marslen-Wilson et al. (2008) for Bangla derivationally suffixed words. In this technique the prime is placed between a forward pattern mask and the target stimulus, which acts as a backward mask. This is illustrated below.

Mask (500 ms):   ########
Prime (72 ms):   sonAli (GOLDEN)
Target (500 ms): sonA (GOLD)

The prime and the target words are either morphologically and/or semantically related or orthographically transparent to each other.
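The trial structure can be rendered schematically in code. The sketch below is purely illustrative of the timing sequence, assuming a console display; the actual experiment was run in DMDX (see Procedure), which locks stimulus presentation to the display refresh, something console output and Thread.Sleep cannot guarantee.

using System;
using System.Diagnostics;
using System.Threading;

class PrimingTrialSketch
{
    static void Present(string stimulus, int durationMs)
    {
        Console.Clear();
        Console.WriteLine(stimulus);
        Thread.Sleep(durationMs);   // crude timing, unlike DMDX's refresh-locked display
    }

    static void Main()
    {
        Present("########", 500);            // forward mask, 500 ms
        Present("sonAli", 72);               // masked prime, 72 ms (below conscious perception)
        Console.Clear();
        Console.WriteLine("sonA");           // target; also backward-masks the prime
        var watch = Stopwatch.StartNew();    // RT runs from target onset
        ConsoleKeyInfo key = Console.ReadKey(true);  // "K" = valid word, "S" = invalid
        watch.Stop();
        Console.WriteLine("\nLexical decision: {0}, RT = {1} ms",
            char.ToUpper(key.KeyChar), watch.ElapsedMilliseconds);
    }
}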
A pair of words is said to be morphologically related if they meet the following conditions:
(a) one word is the derived form of the other;
(b) the derived form has a recognizable suffix.
For example, the word pair bADiOyAlA (House keeper) and bADi (House) is morphologically related, since bADiOyAlA is derived from bADi and has the recognizable suffix -OyAlA. A pair of words is said to be orthographically transparent if the whole or a
<s>significant part of one word is fully or partly contained in the other word. Orthographically transparent words may or may not be morphologically or semantically related to each other. For example, maShA (mosquito) and maShAla (flame) are orthographically transparent but not morphologically related, whereas our previous example, bADiOyAlA (House keeper) and bADi (House), is both orthographically transparent and morphologically related.
After presenting the target probe, the subjects were asked to make a lexical decision on whether the given target is a valid word in the language. The same target word is probed again after a random amount of time, but with a different visual probe called the control word. The control words do not have any morphological, orthographic or semantic relatedness with the target. For example, baYaska (aged) and baYasa (age) is a prime-target pair, for which the corresponding control-target pair could be naYana (eye) and baYasa (age).¹
The time taken by a subject to complete the lexical decision task after the visual presentation of the target is defined as the response time (RT). The RTs for a prime-target pair and the corresponding control-target pair are compared to identify whether there is enough evidence of morphologically structured lexical representation. Experiments in English and other languages show that, in general, the RT for the prime-target pair is significantly less than that for the control-target pair, implying the presence of a morphological priming effect. Nevertheless, all linguistically apparent morphological processes need not have equal priming effects, or any effect at all.

Materials and Methods
We selected 500 prime-target pairs, where the primes are related to the targets in terms of morphology, semantics and/or orthography. In order to factor out the effects of semantics or orthography, we adopted the technique discussed in Rastle et al. (2000) and Marslen-Wilson et al. (2008) and classified the words into five different classes, each consisting of 100 word pairs. Words in these classes are classified according to their morphological, semantic and orthographic relationships. For example, class-I words, or [M+S+O+], consist of word pairs that are morphologically (M+), semantically (S+) as well as orthographically (O+) related. Here “+” (as in M+) indicates relatedness and “−” indicates unrelatedness. Similarly, words that are morphologically unrelated but orthographically related are represented as [M−S−O+], and so on. We also introduce a special class of words, [M’+S−O+], similar to the class [M−S−O+] except that these words contain a valid and transparent Bangla suffix. For example, a word like AmadAni (import) consists of a valid Bangla suffix (-dAni) and a valid stem Ama (mango); however, AmadAni and Ama have no morphological connection between them. This class of words was introduced to observe the priming phenomena for pseudo-suffixed words. Table 2 describes the five classes with examples.
It is interesting to note that while it is very easy to collect word pairs belonging to class I, it is hard to come up with morphologically derived word forms in Bangla which are orthographically unrelated. In fact, almost all the native Bangla suffixes (e.g., -A, -I, -li, -oYA) do not change the form of the root to which they attach. However, there are some derivational processes inherited from Sanskrit where the root forms are phonologically distinct from the derived ones, e.g., hatyA (to</s>
<s>kill)–hi.nsA (violence, i.e., desire to kill).
For each of the 500 target words, we selected another set of 500 control words. These control words are similar to the prime words in terms of word length and number of syllables; however, they are neither morphologically related nor orthographically transparent to the targets. Some statistics of the prime, target, and control words are presented in Table 1.

Table 1: Statistics of the target, prime and control words (numbers in parentheses are standard deviations)
Word type | Avg. word length | Avg. no. of complex characters | Avg. corpus frequency
Target    | 4.0 (2.0)        | 0.260 (0.11)                   | 32.63 (10.10)
Prime     | 6.4 (1.9)        | 0.464 (0.29)                   | 25.82 (7.04)
Control   | 6.2 (1.2)        | 0.472 (0.12)                   | 25.14 (8.33)

As discussed earlier, after the masked prime, a visual probe is presented to the subjects, based on which a lexical decision has to be made. It is therefore essential to prevent the subjects from making any strategic guess about the relationship between the prime and the target word pairs. This can be achieved by introducing fillers in between the actual prime-target or control-target pairs. We constructed a set of 500 filler pairs, categorized into sets of 100 word pairs each: (a) the prime is a valid word but the target is not, although the target is orthographically contained in the prime and is obtained by deleting some word-final character string, e.g., kapAla (forehead)–kapA (non-word); (b) the target is a valid word but the prime is not, although the prime orthographically contains the target and is derived by adding a suffix to the target, e.g., hAtAri (non-word)–hAta (hand); (c) the prime is valid but the target is not, the target being obtained by swapping individual letters, e.g., pAgalAmo (madness)–pAlaga (non-word); and (d) both the prime and target are valid words without any morphological or phonological relatedness.
Thus, all together there are 1,500 word pairs: 500 prime-target pairs, 500 control-target pairs and 500 fillers. Before presenting the word pairs to each subject, they are randomized and divided into two sets, such that a prime-target pair and its corresponding control-target pair are placed in different sets. Moreover, each set contains exactly half of the prime-target and half of the control-target pairs.
¹ This study follows Experiment 1 of Rastle et al. (2000); however, for the sake of readability we briefly describe the design process and other details.

Procedure
The experiment was conducted using the DMDX software tool.² Corresponding to each visual probe, subjects had 3,000 ms to perform the lexical decision, after which the system presents the next masked prime followed by a visual stimulus. The subject performs the decision task by pressing either the “K” button (for a valid word) or the “S” button (for an invalid word) on a standard QWERTY keyboard. The system automatically records the reaction time (RT), which in this case is the time between the onset of the visual probe and the pressing of one of the keys by the subject.
² http://www.u.arizona.edu/~kforster/dmdx/dmdx.htm
Before starting the real experiment, all the subjects were given a short training session on the task. A trial run was also performed using a separately collected set of 20 trial word pairs. As discussed earlier, the experiment is divided into</s>
<s>five different phases. The experimental procedure is the same for all phases, except that the prime and control words differ. The duration of each phase is about 25 min. Since a continuous session of 25 min requires a lot of attention and is tiring for the subjects, we further divided each phase of the experiment into five small sessions of five minutes each, with a ten-minute break between the sessions.

Participants
The experiments were conducted on 32 highly educated native Bangla speakers; 27 of them hold a graduate degree and 5 hold a postgraduate degree. The age of the subjects varies between 22 and 35 years (Table 2).

Table 2: Dataset for the experiment
Class | Explanation | Examples
M+S+O+ | Morphologically derived; stem and suffix are transparent and decomposable; semantically and orthographically related | nibAsa (residence)–nibAsi (resident)
M+S+O− | Morphologically derived; stem and suffix are opaque; semantically related but orthographically not | mitra (friend)–maitri (friendship)
M’+S−O+ | Morphologically unrelated; transparent stem and suffix; semantically unrelated but orthographically related | Ama (mango)–AmadAni (import)
M−S+O− | Semantically related but morphologically and orthographically unrelated | jantu (animal)–bAgha (tiger)
M−S−O+ | Morphologically and semantically unrelated but orthographically related | ghaDi (watch)–ghaDiYAla (crocodile)

Results
The RTs with extreme values and those for incorrect lexical decisions (about 1.8 %) were excluded from the data.³ We also discarded one prime-target pair from our dataset due to its incorrect spelling. Further, four subjects had to be excluded from the experiment due to their inconsistent and extremely high error rates. Overall, we analyzed the RTs of 490 prime-target and 490 control-target pairs for a total of 28 subjects. Table 3 summarizes the average RTs of the prime and control sets for the five classes. The RT and error rate data were submitted to by-subject and by-item analyses of variance with the following main factors: priming relation (prime vs. control) and relation class (M+S+O+, M+S+O−, M’+S−O+, M−S+O−, and M−S−O+).
We observed that, overall, the average RTs for the Bangla control-target pairs are higher than those for the corresponding prime-target pairs. In other words, the priming relation had a significant effect relative to the control relation; the by-subject and by-item F-scores are F1(1, 23) = 32.42, p < .002, and F2(1, 485) = 48.93, p < .005. “Correct” responses to targets were faster when they appeared after the primes than after unrelated controls. The priming effects of the individual classes, along with their significance values, are given in Table 3. To summarize, strong priming effects are observed when the target word is morphologically derived, has a recognizable suffix, and is semantically and orthographically related to the prime [M+S+O+] (F1(1, 23) = 18.21, p = 0.001; F2(1, 96) = 21.13, p < 0.03); weak, although statistically significant, priming is observed for word pairs belonging to [M+S+O−] and [M’+S−O+]; no priming effects are observed when the prime and target words are orthographically related but share no morphological or semantic relationship [M−S−O+] (F1(1, 23) = 17.34, p = 0.006; F2(1, 96) = 13.47, p < 0.004), or are only semantically related without any morphological or orthographic relation [M−S+O−].</s>
These results thus rule out the possibility that the priming in [M+S+O+] could be due to individual effects of orthographic or semantic relatedness.

³ Any RT value that falls outside the range of the average RT ± 500 ms is considered extreme.
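For concreteness, the trimming rule of footnote 3 and the two analyses of variance can be sketched as follows in Python. This is a minimal sketch, not the authors' analysis script: the data-frame layout and column names are our own assumptions, and the toy rows merely stand in for the recorded RTs.

```python
# Minimal sketch of the RT trimming rule and the by-subject (F1) and
# by-item (F2) repeated-measures ANOVAs; names and data are assumptions.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = [(s, i, c, 600 - (40 if c == "prime" else 0) + rng.normal(0, 50))
        for s in range(28) for i in range(20) for c in ("prime", "control")]
df = pd.DataFrame(rows, columns=["subject", "item", "condition", "rt"])

# Trim extreme RTs: anything outside average RT +/- 500 ms (footnote 3).
mean_rt = df["rt"].mean()
df = df[df["rt"].between(mean_rt - 500, mean_rt + 500)]

# F1: average over items within each subject x condition cell.
f1 = df.groupby(["subject", "condition"], as_index=False)["rt"].mean()
print(AnovaRM(f1, depvar="rt", subject="subject", within=["condition"]).fit())

# F2: average over subjects within each item x condition cell.
f2 = df.groupby(["item", "condition"], as_index=False)["rt"].mean()
print(AnovaRM(f2, depvar="rt", subject="item", within=["condition"]).fit())
```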
Table 3 Average RT for the word classes, with F-scores and p values

Class       Prime RT (ms)  Error (%)  Control RT (ms)  Error (%)  Diff   ANOVA
[M+S+O+]    523            2.40       589              1.20       66     F1(1, 23) = 18.21, p < 0.001; F2(1, 96) = 21.13, p < 0.030
[M+S+O−]    653            2.00       660              1.60       7      F1(1, 23) = 10.04, p = 0.07; F2(1, 94) = 13.13, p < 0.06
[M'+S−O+]   554            2.49       542              1.86       12     F1(1, 23) = 12.42, p < 0.009; F2(1, 94) = 11.93, p < 0.040
[M−S+O−]    606            3.12       597              2.11       −19    F1(1, 23) = 17.56, p < 0.02; F2(1, 96) = 18.39, p < 0.005
[M−S−O+]    690            3.69       657              3.64       −43    F1(1, 23) = 19.67, p = 0.001; F2(1, 95) = 15.53, p < 0.008

Analysis of RTs for Lexical Items

It is interesting to look at the individual lexical items whose priming behavior deviates from that of their class. For instance, akarma (useless work)–akarmaNyA (worthless girl), pAkA (smart)–pAkAmo (street smartness), srama (labour)–sramika (worker) and kShamA (forgiveness)–kShamaNIYa (forgivable) exhibit the least priming effect in [M+S+O+]. In the [M+S+O−] class, prime-target pairs like pAna (to drink)–pipAsA (thirsty), dharA (hold)–dhairya (patience) and chalA (move)–chAlita (controlled) show no priming effect despite the strong morphological association between the pairs. In general, we observe that participants are unable to recognize the morphological connection between most of the derivationally suffixed word pairs in the [M+S+O−] class. Examples include suhRRida (friend)–souhArdya (friendship), uchit (appropriate)–auchitya (appropriateness) and hatyA (murder)–hi.nsA (violence). One explanation is that Bangla inherits these morphological forms from Sanskrit, and the derivational process is no longer transparent to its speakers.

Another important observation from the experiment is that a significant number (around 38 %) of prime-target pairs belonging to the [M+S+O+] class show weak or no priming despite their high morphological association. For example, pairs like ghana (dense)–ghanatba (density), ga~Nga (river Ganges)–ga~Ngajala (water from the Ganges), jiba (living being)–jibanta (alive), and chora (thief)–chorAI (smuggled) show very weak priming effects. In order to eliminate possible experimental errors, we repeated the same priming experiment with these words on the same set of subjects and obtained the same result for all the pairs (although the average RT for some of the targets deviated from the original result, this did not change the overall findings).

Analysis of High RT Lexical Items

We also observed that the RTs for certain pairs of words were significantly higher than one would expect, and consistently so across all the participants. Manual inspection of these words indicates that the target or the corresponding prime/control words in such cases have one of the following properties:

a) Very infrequent
b) Long in terms of the number of characters present (>7)
c) Presence of certain conjugates such as (Sh+T), (l+p) and (~N+g), and other irregular or non-transparent glyphs (g+u) and (h+RRi)
in the target
d) Incorrect spelling of the target (e.g., sharira instead of sharIra)

Table 4 List of Bangla words having conjugate characters and their average RT across 28 subjects

Word                     Corpus frequency  Length  Avg. RT (ms)
jIbanta (alive)          26                6       641 (67)
bayaska (old)            34                6       773 (73)
hindustAna (India)       1                 11      754 (98)
ghanatba (density)       6                 5       774 (76)
sUryAsta (sunset)        20                9       846 (53)
kerAnigiri (clerkship)   8                 10      1,078 (102)
lambAi (length)          1                 6       1,132 (111)
rAShTrIYa (national)     113               9       1,227 (94)

Numbers in parentheses are standard deviations.

Frequency effects on recognition time are well studied (Forster and Davis 1984; Taft 2004) and explain observation (a). It is also well known that visual word recognition time and accuracy depend on several factors such as font size, font type, eccentricity (the angle of the visually presented word from the focus of the eye), and the crowding effect, i.e., the physical length of a word [see, e.g., Jo (2000)]. Therefore, observation (b) is not surprising either. However, the last two observations are specific to Bangla orthography and raise some interesting research questions.

The Bangla script uses a large number of non-transparent glyphs for conjugates and also for some consonant-vowel pairs. These glyphs have long been a point of discussion among scholars of the Bangla language, especially for pedagogical reasons: non-transparency in character representation leads to poor recognition and recall of the glyphs, as well as of the words containing them, and this negatively affects the learning process in young children. There have therefore been proposals to use the less common but easier-to-recognize transparent forms of these glyphs. We do not know of any systematic study that explores and quantifies the cognitive load associated with learning and processing glyphs of varying degrees of transparency. Since such a study is beyond the scope of the current work, the experimental items were not prepared to specifically isolate glyph recognition complexity. Nevertheless, we do observe an effect of glyph transparency and glyph usage frequency on word recognition time. Uncommon and non-transparent glyphs (e.g., (Sh+T)) have the highest recognition times, whereas very frequent glyphs (e.g., (k+Sh)), even when non-transparent, do not seem to affect the recognition time of the words negatively. Table 4 lists Bangla words containing different conjugate characters and their average RT over the 28 subjects.

High recognition times and error rates for incorrect spellings, or non-words, are well known. They are nevertheless interesting in the context of Bangla, because Bangla does not distinguish between short and long vowels in pronunciation, even though the distinction is traditionally maintained in the written forms. Recently, there have been several controversial proposals for spelling reforms in which all long vowels are to be replaced by their shorter counterparts. The unintentional error in our dataset, sharira (body) instead of the more common and popularly accepted form sharIra, was discovered accidentally when we observed very high RTs for the pairs involving this item as the target. Thus, it might be argued that speakers who have learnt the traditional spellings will find it hard to recognize the new spellings.

In order to extend this argument, we conducted a separate lexical decision experiment. Here, we chose 80 Bangla words that have multiple accepted spelling conventions. The words were shown to 21 subjects using the procedure discussed in Baayen et al. (1997) and Taft (2004). We asked subjects to decide whether a given word is valid or not. As in the priming experiment discussed above, we recorded the reaction time for individual words per
subject. An illustration of some typical Bangla words and their average RTs is given in Table 5. We found that, in most cases, the RT for words in the more common form of representation is significantly lower than for words in an uncommon representation (F1(1, 20) = 11.4, p < 0.05; F2(1, 80) = 23.11, p < 0.02). This is not a surprising conclusion, though the exact nature and extent of the difficulty in perceiving the new forms is a topic for further research.

Table 5 Comparison of RTs between Bangla words in their conventional and unconventional spelling forms. Numbers in parentheses are standard deviations.

Analysis of Error Rates

During the priming experiments, participants can make an incorrect lexical decision about whether a word is valid or invalid. The errors could be due to a participant's incorrect judgment about the validity of a word, or to a wrong key press despite a correct judgment. In general, error rates and RTs for non-words are higher than for valid words. Table 6 reports the error rates and RTs for the prime-target, control-target and filler pairs. As expected, we observe high error rates and high RTs for fillers, which mostly contain non-words as target or prime; in fact, 81 % of the total errors for the fillers are for the non-words. The overall error rate, however, is quite low.

Table 6 Comparison of RT and error rates between prime, control and fillers

Class     Average RT (ms)  Error (%)
Prime     579              1.2
Control   654              1.9
Fillers   1,011            6.2

Fig. 1 Comparison of error rates across word classes

Fig. 2 Comparison of error rates for different categories of lexical items. The gray and the white cells are respectively for participants who displayed significant and insignificant priming effects

Recall that the test of significance for individual subjects revealed that 28 out of 32 participants showed statistically significant priming effects (p < 0.03), which led us to hypothesize that the remaining four participants were either not paying attention during the experiments or not well exposed to Bangla owing to their medium of education. Therefore, we would expect their error rates to be higher than those of the other 28 participants. Figure 1 plots the histogram of error rates for the significantly primed (left bars) and non-significantly primed participants. The overall error rate of the former class of participants (41 %) is much lower than that of the latter (59 %), which matches our speculation. Again, as one would expect, the maximum errors are made on fillers. Among the valid words, the highest error rates are observed for the classes [M−S+O−] and [M−S−O+] (see Fig. 2); recall that these are the classes for which we observe no priming effect.

Discussion

As explained earlier, a priming effect from a morphologically derived word is evidence of decomposition, which reduces the RT of the target. However, it is apparent from the above results that not all polymorphemic words decompose during processing. This contradicts the obligatory decomposition model of Taft and Forster (1975) and Taft (2004). Naturally, the question arises: what other factors are responsible for the decomposition of Bangla polymorphemic words? To answer this, we need to investigate the processing of Bangla derived words further. One notable approach is to identify whether the stem or suffix frequency of a polymorphemic word is involved in
the processing stage of that word. For this, we apply the existing frequency-based models to Bangla polymorphemic words and evaluate their performance by comparing their predictions with the results obtained from the priming experiment.

Applying Frequency Models to Bangla Polymorphemic Words

Model-1: Base Word Frequency Effects

The base word frequency model states that the probability of decomposition of a Bangla polymorphemic word depends upon the frequency of its constituent stem: a polymorphemic word containing a high-frequency stem will be decomposed faster than a word with a low-frequency stem. In order to compare the results with those of the masked priming experiment discussed in the previous section, we made a slight change to the original model. We propose that if the stem frequency of a polymorphemic word crosses a given threshold value τ, then the word will be decomposed into its constituent morphemes. The model is formally represented as:

    Decomposability(w) = TRUE,  if log10(frequency(w_stem)) ≥ τ
                         FALSE, otherwise

The value of τ is computed as the log of the average base word frequency of Bangla words in a corpus,⁴ which gives τ = 0.09.

⁴ Corpus frequency is computed by combining the CIIL and Anandabazar corpora with the literary works of Rabindranath Tagore and Bankim Chandra, available from www.ciil.org, iitkgp.ernet.in and nltr.org.
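As a concrete illustration, the decision rule and the evaluation against the priming data can be sketched as follows. This is our own minimal Python rendering, not the authors' code; the example frequencies are the counts quoted in the text, and the scoring helper simply computes the confusion-matrix measures used throughout this section.

```python
# Sketch of Model-1: decompose iff log10(stem frequency) >= tau.
import math

TAU = 0.09  # log10 of the average base word frequency, as computed above

def decomposes_model1(stem_frequency):
    return math.log10(stem_frequency) >= TAU

print(decomposes_model1(2241))  # patha, the stem of pathika -> True
print(decomposes_model1(1))     # a hapax stem -> False (whole-word access)

def evaluate(predicted, primed):
    """Score predictions against the priming results (True = priming observed)."""
    tp = sum(p and g for p, g in zip(predicted, primed))
    fp = sum(p and not g for p, g in zip(predicted, primed))
    fn = sum(not p and g for p, g in zip(predicted, primed))
    tn = sum(not p and not g for p, g in zip(predicted, primed))
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure, (tp + tn) / len(primed)
```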
We applied Model-1 to a set of 500 morphologically derived words. According to the model, words like pathika (318),⁵ jalA (15), bADiwAlA (19), and baYaska (34) will be decomposed into their constituent stems and suffixes during processing, because all of them are derived from very high-frequency stems such as patha (2241), jala (1736), and bADi (1118). Priming should therefore be observed when these stems (the targets) are preceded by the derived words (the primes): prior exposure to the prime results in its decomposition into morphemes, so that recognition of the target begins well before the target is actually probed. Similarly, according to Model-1, derived words like ginnipanA, rAjakIYa, and nibAsi will not prime their stems and thus will not be decomposed during processing. The predictions of the model are evaluated against the results of the priming experiment discussed in the previous section, and its performance is computed in terms of precision, recall, F-measure and accuracy. The confusion matrix along with the computed results is given in Table 7.

⁵ The number in parentheses is the frequency of the word in the corpus.

Table 7 Summarizing the results of the base word frequency model (Model-1; values out of 500 words)

True positive    199      Precision (%)   60
False positive   135      Recall (%)      78
True negative    111      F-measure (%)   68
False negative   56       Accuracy (%)    62

We observe that the model has an accuracy of 62 %. However, Table 7 also shows false positive and false negative rates of around 26 and 11 % respectively. That is, for about 26 % of the words (e.g., ekShatama, juYADi and rAjakiYa), the model predicts no morphological decomposition because of their extremely low base word frequencies (between 1 and 7 in a corpus of about 4 million words), whereas the priming experiment shows a high degree of decomposition. Likewise, the model fails to explain the non-decomposability of about 11 % of the words (e.g., laThiYAla, dAktArakhAnA, and Alokita), despite their high root word frequencies (between 100 and 1,100). Hence, in the next section we experiment with the derived word frequency model in search of a model that can explain these exceptions.

Model-2: Derived Word Frequency Effect

In this model we try to explain the priming phenomena in terms of whole-word (surface) frequency. The hypothesis is that if a specific morphologically complex form is above a certain frequency threshold, whole-word access will be preferred over decomposition, and no priming effect will be visible in this case; if, on the other hand, the derived word frequency is below that threshold, the parsing route will be preferred, and the word will be accessed via its parts. The derived word frequency model can be formally represented as:

    Decomposability(w) = TRUE,  if log10(frequency(w)) ≤ τ
                         FALSE, otherwise

To apply this model to Bangla polymorphemic words, we computed the threshold as the average corpus frequency of words, which comes to 1.33 (in log10 units). A Bangla morphologically complex word whose surface frequency exceeds the threshold τ will therefore be accessed as a whole; otherwise, it will be decomposed into its parts. For example, words like sonAli (179), galAbAji (334), and suryAsta (407) should be processed as wholes, while words like ginnipanA, juYA.Di, and ekaShatama should be parsed into their constituent morphemes, namely ginni, juYA, and ekaSha. Following the procedure used for Model-1, the same 500 polymorphemic words were given as input, and the predictions were compared with the data collected from the priming experiment (see Table 8 for the confusion matrix and the computed results).

Table 8 Summarizing the results of the surface word frequency model (Model-2; values out of 500 words)

True positive    155      Precision (%)   58
False positive   111      Recall (%)      51
True negative    88       F-measure (%)   54
False negative   143      Accuracy (%)    49

From the results in Table 8, we observe that the model can explain the decomposition of low-frequency derived words (like juYA.Di, nishThAbAna, and ekaShatama), which Model-1 fails to explain; accordingly, the false positive rate of the present model (21 %) is lower than that of Model-1. However, Model-2 performs poorly because of its high false negative rate (28 %): it fails to recognize potentially decomposable words (like meghalA, pAkAmo and AkAShamandala).
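For completeness, Model-2's decision rule can be sketched in the same style as the Model-1 sketch (again our own rendering, with example frequencies taken from the text):

```python
# Sketch of Model-2: decompose iff log10(surface frequency) <= tau.
import math

TAU_SURFACE = 1.33  # log10 of the average corpus frequency of words

def decomposes_model2(surface_frequency):
    return math.log10(surface_frequency) <= TAU_SURFACE

print(decomposes_model2(407))  # suryAsta -> False (whole-word access)
print(decomposes_model2(3))    # a rare derived form -> True (decomposed)
```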
Discussion

From the above results we observe that Model-1 predicts that priming/decomposition will take place whenever the base word frequency is high, irrespective of the frequency of the prime; this prediction is not validated when both the prime and the target are of high frequency. Model-2, on the other hand, predicts that priming/decomposition will take place whenever the prime is of low frequency; this prediction is not validated by the experimental results for low-frequency prime and low-frequency target pairs. The two extremes of parsing thus call for a new model.

Model-3: Relative Frequency between Base and Derived Words

In pursuit of an extended model, we combine Models 1 and 2 to see whether and how their combination can predict the parsing phenomena. One way to combine base and derived word frequency is through regression analysis. Following the technique discussed in Hay and Baayen (2001), we took the log frequencies of both the base and the derived words and plotted them on a log-log scale. To obtain the best-fit line over the dataset we used least-squares regression, the equation of the line being:

    log10(BaseFrequency) = 0.346 × log10(SurfaceFrequency) + 1.611

We propose that any point falling above the regression line will be parsed into its constituent morphemes during processing, while points below the line will be accessed as wholes. In other words, given the surface frequency of a derived word w, the equation above predicts the frequency of the corresponding base word; if the actual base frequency is greater than the predicted one, the point lies above the regression line, and the word will be accessed via decomposition. This is illustrated in Fig. 3, which shows the surface and base word frequency distribution of 2,000 Bangla polymorphemic words: points lying on or above the regression line are predicted to be parsed during processing, whereas points lying below it are predicted to be accessed as wholes.

Fig. 3 The relation between log derived frequency and log base frequency for 2,000 different Bangla polymorphemic words. The solid line is the least-squares regression line

Table 9 Summarizing the results of the relative frequency model (Model-3: base/surface frequency ratio; values out of 500 words)

True positive    199      Precision (%)   70
False positive   88       Recall (%)      75
True negative    143      F-measure (%)   72
False negative   67       Accuracy (%)    69

We validated the model by comparing its predictions with the results of the masked priming experiment on the 500 Bangla polymorphemic words; the results and accuracy are given in Table 9. The model performs much better than the previous two models, with false negative and false positive rates below 17 %, and achieves an accuracy of 69 %. Consequently, however, a significantly high number of words (31 %) are still wrongly classified. This may be accounted for by the fact that most of the derived words that could not be correctly classified by the present model are composed of low-frequency stems and suffixes.
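The fit and the above/below-the-line test can be sketched as follows. This is our own illustration: the input frequencies are placeholders, so the fitted coefficients will not reproduce the 0.346 and 1.611 reported above.

```python
# Sketch of Model-3: regress log base frequency on log surface frequency and
# predict decomposition for words whose stems lie on or above the line.
import numpy as np

surface = np.array([179.0, 334.0, 407.0, 3.0, 15.0, 318.0])    # derived word freqs
base = np.array([200.0, 150.0, 90.0, 2241.0, 1736.0, 2241.0])  # stem freqs

x, y = np.log10(surface), np.log10(base)
slope, intercept = np.polyfit(x, y, 1)  # the paper reports 0.346 and 1.611

decomposes = y >= slope * x + intercept  # on/above the line -> parsed
print(slope, intercept, decomposes)
```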
These misclassifications led us to further modify the existing model in order to study the role of individual suffixes in the morphological decomposition of Bangla polymorphemic words.

Exploring the Role of Suffixes in the Processing of Bangla Words

One key issue not addressed by Model-3 is whether regressing base frequency on derived frequency separately for each suffix produces any variation in the slope and intercept of the resulting lines. For English, it has been observed that such per-suffix regressions do generate different slopes and intercepts, and Hay and Baayen (2001) showed that suffixes with high intercept values have a higher tendency to decompose than suffixes with low intercept values.

In this section we examine the same question for Bangla: does the regression between base and derived frequency vary across suffixes, and how do such variations affect word decomposition? For this, we chose six different native Bangla suffixes with varying token frequencies and, for each suffix, 10 different derived words. We then fitted a regression line for the words under each suffix and found that the intercepts of the regression lines for Bangla suffixes vary considerably.⁶ Figure 4 illustrates the regression analysis for the six Bangla suffixes, together with their base word and derived word frequencies (a code sketch of this per-suffix analysis follows below). We observe that suffixes with high intercept values form derived words whose base frequencies are substantially higher than those of their derived forms; moreover, a high intercept value for a given suffix indicates a stronger inclination towards decomposition rather than whole-word access.

Fig. 4 The relation between log derived frequency and log base frequency for four affixes. The lines represent least-squares regression lines

From the above analysis we conclude that the decomposition of a Bangla polymorphemic word depends not only on the base and derived word frequencies but also on the characteristics of the given suffix. That is, whether a polymorphemic word is accessed via decomposition or as a whole depends on several factors: the frequency distribution between the base word and the derived word, the type and token frequency of the suffix, and the degree of affixation between the stem and the suffix. Thus, even when both the derived-to-stem frequency ratio and the suffix type/token ratio fall below the threshold τ, a Bangla polymorphemic word may fail to show decomposition because the degree of affixation between the stem and the suffix is weak. In the following sections we therefore explore the degree of affixation between stem and affix; we first identify the role of suffix frequencies (type and token) in determining the decomposition of Bangla polymorphemic words.

⁶ Similar results were reported for the English suffixes in Hay and Baayen (2001).
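The per-suffix regression just described might be sketched as follows; the data-frame layout and the tiny numbers are our own assumptions, meant only to show the mechanics:

```python
# Sketch of the per-suffix regression: one least-squares line per suffix,
# compared by intercept (higher intercept -> stronger decomposition tendency).
import numpy as np
import pandas as pd

# Hypothetical long-format table: one row per derived word.
df = pd.DataFrame({
    "suffix": ["-tba", "-tba", "-giri", "-giri"],
    "surface_freq": [6.0, 12.0, 8.0, 30.0],
    "base_freq": [2241.0, 500.0, 40.0, 15.0],
})

for suffix, group in df.groupby("suffix"):
    slope, intercept = np.polyfit(np.log10(group["surface_freq"]),
                                  np.log10(group["base_freq"]), 1)
    print(f"{suffix}: slope={slope:.3f}, intercept={intercept:.3f}")
```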
Model-4: Suffix Type/Token Ratio Model

The type frequency of a suffix is the total number of distinct words associated with it, while its token frequency is the total number of times the suffix occurs attached to a word. In this model, the type/token frequency ratio of individual suffixes is used to study the decomposition of Bangla polymorphemic words. As suggested earlier, the lower the token frequency of a suffix, the greater the chance that a word carrying it will be parsed. The type frequency of a suffix reflects its potential for forming entirely new words; in other words, it counts how many different types of words the suffix can derive from base words. By taking the ratio between the type and the token frequency of every suffix that can attach to a given stem, we determine the degree of affixation between that stem and suffix, and from this we try to predict the access mechanism of Bangla polymorphemic words. We expect that the lower the degree of affixation between a stem and a suffix, the higher the probability that the derived word is decomposed. The hypothesis for this model is therefore: for a given Bangla polymorphemic word, if the type/token frequency ratio (on a logarithmic scale) of the suffix attached to the word exceeds a predefined threshold τ, the word will be accessed as a whole; otherwise the derived word will be decomposed into its stem and suffix. The threshold is computed by taking the average of the ratio between surface word and base word frequency over around 2,000 polymorphemic words; we estimated this average, and hence the threshold, to be around 0.08. The proposed model can be represented as:

    Decomposability(w) = TRUE,  if frequency(Type(w_suffix)) / frequency(Token(w_suffix)) ≤ τ
                         FALSE, otherwise

As with the previous models, the new model was evaluated over a set of 500 Bangla polymorphemic words in which the stem and suffix are transparent (i.e., the suffix is fully or partly recognizable). The performance of the model, presented in Table 11, shows 69 % accuracy.
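Model-4's decision rule reduces to a simple threshold test. The sketch below is our own reading of the displayed formula, using the plain ratio (the text's mention of a logarithmic scale notwithstanding), with hypothetical counts:

```python
# Sketch of Model-4: decompose iff the suffix's type/token ratio is <= tau.
TAU_RATIO = 0.08  # average ratio estimated over ~2,000 polymorphemic words

def decomposes_model4(suffix_type_freq, suffix_token_freq):
    return (suffix_type_freq / suffix_token_freq) <= TAU_RATIO

# Hypothetical suffix counts: 60 distinct types over 3,000 tokens.
print(decomposes_model4(60, 3000))  # ratio 0.02 -> True (decomposed)
```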
Although Model-4 offers no improvement over Model-3 in terms of accuracy, it performs best at determining true negatives (see Table 11) and can thus better predict the words that do not show decomposition. Model-3, on the other hand, has a high precision of 70 % and better detects the true positives (199) than Model-4. Therefore, despite having the same accuracy, the two models have complementary strengths in classifying different types of words. This observation is further illustrated in Table 10, which lists sample prime-target pairs given as input to Model-3 and Model-4.

Table 10 Sample prime-target pairs given as input to Model-3 and Model-4, and the models' performance

Prime–target                              Base/surface freq. ratio   Priming type   Model-3 result   Model-4 result
jIbanta–jIba (lively–living)              0.47                       0              1                0
bA.DioYAlA–bA.Di (Housekeeper–House)      0.01                       1              1                1
bayaska–bayasa (Old–Age)                  0.05                       1              1                1
nibAsI–nibAsa (Resident–Residence)        0.04                       0              0                1
meghalA–megha (Cloudy–Cloud)              0.02                       0              0                1
Alokita–Alo (Lightning–Light)             0.02                       0              0                1
rAShTrIYa–rAShTra (National–Nation)       2.05                       1              0                0
nAchunI–nAcha (Dancer–Dance)              0.05                       0              0                0

Priming type = 1 means a significant degree of priming is observed for the word pair; priming type = 0 means no or weak priming. For the Model-3 and Model-4 results, 1 means the model correctly classifies the decomposition of the derived word, and 0 means it fails to classify the word correctly.

Table 11 Summarizing the results of the type/token ratio model (Model-4; values out of 500 words)

True positive    100      Precision (%)   50
False positive   100      Recall (%)      85
True negative    158      F-measure (%)   63
False negative   15       Accuracy (%)    69

From Table 10 we observe that words like meghlA (cloudy), nibAsI (resident) and Alokita (shine), despite showing very weak priming effects, are wrongly classified by Model-3 as decomposable because of their low base-to-surface frequency ratios (0.04, 0.02, and 0.02 respectively). When the same words are given as input to Model-4, they are correctly classified as non-decomposable, which can be attributed to the low type/token ratios of their suffixes (0.01, 0.03, and 0.018 respectively), making the words difficult to decompose. However, both proposed models fail to explain the decomposition of a word like rAShtriYa (national) and the non-decomposition of a word like nAchuni (dancer), which call for deeper analysis. Nevertheless, these data and observations strengthen our claim that base and surface word frequencies are not the only factors responsible for decomposition: suffix properties play an equally important role in determining how Bangla polymorphemic words are decomposed in the mental lexicon. We therefore argue that combining the above models can better predict the decomposability of Bangla polymorphemic words. Before doing so, however, we analyze whether, along with the type/token ratio, the productivity of a suffix plays any role in morphological decomposition.

Model-5: Suffix Productivity in Morphological Decomposition

In this section our objective is to identify the degree of affixation between a given suffix and a word; in other words, we compute how readily a given suffix attaches to a given stem. This is done by computing the productivity of the suffix. Although suffix type frequency has been proposed as a determiner of productivity, it has also been argued that productivity is multifaceted and can be assessed in different ways (Hay and Plag 2004). In this paper we apply the technique of Hay and Plag (2004) to compute the productivity of Bangla suffixes. There are three main components of productivity: P, P*, and V. V is the "type frequency" of a suffix, that is, the number of different types of words to which the suffix attaches. P is the "conditioned degree of productivity": the probability that an encountered word carrying the suffix S represents a new type. The productivity of a suffix S, denoted P(S), is therefore computed as:
    P(S_i) = P(w | S_i ∩ frequency(w) = 1) = Hcount(S_i) / N(S_i)
           = (number of hapaxes with that suffix) / (number of tokens containing the suffix)

where Hcount(S) is the number of hapaxes with the given affix S, and N(S) is the number of tokens containing the suffix. Hapaxes are words that occur exactly once in the corpus. Hapaxes and their counts are important in linguistics because they reveal how capable a suffix is of forming an entirely new word, i.e., its strength in producing new and rare words.

P* is the "hapax-conditioned degree of productivity". It expresses the probability that an entirely new word, when encountered, will contain the suffix. It is measured as the number of hapaxes in the corpus with that affix divided by the total number of hapaxes in the corpus:

    P* = P(hapax | S_i) = (number of hapaxes in the corpus with the suffix S_i) / (total number of hapaxes in the corpus)

Finally, we add P and P* to obtain the productivity value of each suffix. We chose 27 suffixes: 9 of them very frequent (type frequency between 1,000 and 1,700 words, token frequency 3,000–7,000), 9 moderately frequent, and the rest least frequent (type frequency below 100, token frequency below 500). For every suffix we computed the type and token frequencies, the hapax count, and the productivity, along with the correlations between these factors (see Table 12).

Table 12 Correlation between suffix type frequency, token frequency, hapax count and conditioned degree of productivity

                  Token frequency   Hapax count   Productivity
Type frequency    0.97              0.91          −0.726
Token frequency   –                 0.909         −0.694
Hapax count       –                 –             −0.701

We found that, for Bangla, type and token frequencies correlate significantly with each other as well as with the hapax count; this implies that as the type/token frequency of a suffix increases, so do its chances of forming hapaxes. Although a negative correlation is observed between type/token frequencies and hapax count on the one hand and suffix productivity on the other, this correlation is not significant. We therefore aim to identify the role of suffix productivity in the processing of words in the mental lexicon. Accordingly, we computed both the conditioned degree of productivity (P) and the hapax-conditioned degree of productivity (P*) and fitted a regression line between them, with the equation:

    P = 0.040 × P* − 0.124

We hypothesized that any point lying above the regression line will be processed via decomposition, while points below it will be processed as wholes. We evaluated this model with the same set of 500 Bangla polymorphemic words used for the priming experiments; Table 13 gives the overall result of the evaluation.

Table 13 Summarizing the results of the suffix productivity model (Model-5; values out of 500 words)

True positive    240      Precision (%)   84
False positive   44       Recall (%)      73
True negative    129      F-measure (%)   73
False negative   87       Accuracy (%)    74
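The productivity measures P and P* can be sketched as follows; this is our own simplification in which suffixhood is approximated by a string-suffix test, and the toy counts are invented for illustration:

```python
# Sketch of the productivity measures P and P* (Hay and Plag 2004) as used
# above. `corpus_freq` maps word -> corpus frequency.
def suffix_productivity(suffix, corpus_freq):
    with_suffix = {w: f for w, f in corpus_freq.items() if w.endswith(suffix)}
    n_tokens = sum(with_suffix.values())  # tokens containing the suffix
    hapax_suffix = sum(1 for f in with_suffix.values() if f == 1)
    hapax_total = sum(1 for f in corpus_freq.values() if f == 1)
    p = hapax_suffix / n_tokens           # conditioned degree of productivity
    p_star = hapax_suffix / hapax_total   # hapax-conditioned degree
    return p + p_star                     # combined productivity score

# Toy corpus: "-giri" forms one hapax among its tokens.
toy = {"kerAnigiri": 1, "dAdAgiri": 9, "bADi": 100, "patha": 1}
print(suffix_productivity("giri", toy))
```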
We observed that as the productivity of a suffix increases, the probability of decomposition of a word carrying it also increases. For example, the suffixes "-wAlA", "-giri", "-tba", and "-panA" are highly productive (productivity between 0.6 and 0.9) compared with the suffixes "-A", "-Ani", "-tama", and "-I"; words with the more productive suffixes are therefore more prone to decomposition. We validated the model with the same 500 words used for the previous models and found an accuracy of around 74 %.

One important observation from Tables 11 and 13 is that both Model-4 and Model-5 perform best at determining true negatives. Model-4 has a high recall (85 %) but a low precision (50 %), whereas Model-3 and Model-5 have high precision (70 and 84 % respectively). This implies that Model-4 accurately predicts the words for which decomposition will not take place, while Model-3 and Model-5 accurately identify the words for which decomposition will occur. We therefore argue that combining the three models can enhance overall performance. In the next section we present a new model that combines the strengths of these three models in determining the decomposability of Bangla polymorphemic words.

Model-6: Combining Model-3, Model-4, and Model-5

Following the discussion in the last section, we combined Models 3, 4 and 5 into a new, enhanced model. The combination was done by performing both a logical AND and a logical OR operation over the outputs of Model-3, Model-4 and Model-5. Since the OR operation results in slightly better accuracy (the two are otherwise comparable), we adopted the logical OR over the feature models:

    Decomposability(w) = TRUE,  if M3(w) ∨ M4(w) ∨ M5(w) = 1
                         FALSE, otherwise

As with the earlier models, we evaluated Model-6 on the same 500 words. The results are given in Table 14, which also compares our final proposed model with the existing ones. The final model achieves an accuracy of 80 %, with a precision of 87 % and a recall of 78 %, outperforming the models discussed in the earlier sections.

Table 14 Comparative results of the existing frequency-based models and our proposed models

                   M1 (BF)   M2 (SF)   M3 (log SF vs. log BF)   M4 (TYP/TKN vs. SF/BF)   M5 (P, P*, V)   M6 (combined)
False positive     135       111       88                        133                      44              32
True negative      111       88        143                       212                      129             175
True positive      199       155       199                       133                      240             228
False negative     56        143       67                        20                       87              64
Precision (%)      60        58        70                        50                       84              87
Recall (%)         78        51        75                        75                       73              78
F-measure (%)      68        54        72                        60                       74              82
Accuracy (%)       62        49        68                        69                       74              80

M-1 to M-6 correspond to Model-1 to Model-6. BF = base frequency model, SF = surface frequency model, TYP = suffix type frequency, TKN = suffix token frequency, Combined = Models 3, 4 and 5 combined
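The OR combination itself is a one-liner; we sketch it for completeness (our rendering of the displayed rule):

```python
# Sketch of Model-6: a word is predicted decomposable when any of Model-3,
# Model-4, or Model-5 predicts decomposition (logical OR over their outputs).
def decomposes_model6(m3, m4, m5):
    return m3 or m4 or m5

print(decomposes_model6(True, False, False))  # any one vote suffices -> True
```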
However, around 22 % of the test words, including words like rAShTrIya, nAchuni, nishThAbAna, and juyADi, were wrongly classified by the combined model, and the model fails to justify these cases. Thus, a more rigorous set of experiments and data analyses is required to predict the access mechanisms of such Bangla polymorphemic words.

General Discussion and Conclusion

In this paper we attempted to model the representation and processing of Bangla morphologically complex words. Our aim was to determine whether a Bangla polymorphemic word is accessed as a whole or is decomposed into its constituent morphemes and recognized accordingly. We approached this question from two different angles. First, we conducted a series of psycholinguistic experiments based on the masked priming paradigm. The reaction times of the subjects in recognizing various lexical items under appropriate conditioning reveal important facts about their organization in the brain, which are discussed in the paper.

Our initial results show that morphologically related prime-target pairs prime each other irrespective of their orthographic or semantic relatedness. On the other hand, prime-target pairs that are morphologically opaque exhibit no priming effects even if they are orthographically or semantically related. Further, RT analysis of individual words showed that a significant number of Bangla polymorphemic words do not decompose during processing. These observations lead us to believe that the mental representation and access of polymorphemic words in Bangla follow a partial decomposition model. We also observe that several other factors, including word usage frequency, orthographic complexity, word length and spelling, affect overall word recognition time and accuracy. Each of these factors calls for rigorous experimentation to understand the exact nature of their interdependencies.

In the second approach, we developed a computational model to predict the recognition process of Bangla polymorphemic words. To do so, we explored the individual roles of different linguistic features of a Bangla morphologically complex word and proposed corresponding feature models. We finally combined the individual feature models and proposed a new model that can more accurately predict the processing of a Bangla morphologically complex word. The combination was done by performing both logical OR and logical AND operations over the outputs of the individual feature models, with the logical OR performing slightly better. Finally, we observed that the decomposition of Bangla morphologically complex words depends upon several factors: base and surface word frequency, suffix type/token ratio, suffix family size and suffix productivity. The combined model shows an accuracy of 80 %, outperforming the individual feature models described in the paper. However, our proposed combined model (Model-6) fails to explain the processing of the remaining 20 % of words, for which further experiments and RT analyses are required. To the best of the authors' knowledge, there is no other work on the computational modeling of Bangla polymorphemic words against which we could benchmark our results.

References

Aitchison, J. (2005). Words in the mind: An introduction to the mental lexicon. London: Taylor & Francis.
Ambati, B., Dulam, G., Husain, S., & Indurkhya, B. (2009). Effect of jumbling the order of letters in a word on reading ability for Indian languages: An eye-tracking study. Proceedings of the 31st Annual Conference of the Cognitive Science Society.
Austin, TX: Cognitive Science
Society.
Baayen, H. (2000). On frequency, transparency and productivity. In G. Booij & J. van Marle (Eds.), Yearbook of morphology (pp. 181–208).
Baayen, R., Dijkstra, T., & Schreuder, R. (1997). Singulars and plurals in Dutch: Evidence for a parallel dual-route model. Journal of Memory and Language, 37(1), 94–117.
Baayen, R. H., Feldman, L. B., & Schreuder, R. (2006). Morphological influences on the recognition of monosyllabic monomorphemic words. Journal of Memory and Language, 55(2), 290–313.
Bentin, S., & Feldman, L. (1990). The contribution of morphological and semantic relatedness to repetition priming at short and long lags: Evidence from Hebrew. The Quarterly Journal of Experimental Psychology, 42(4), 693–711.
Bertram, R., Baayen, R. H., & Schreuder, R. (2000a). Effects of family size for complex words. Journal of Memory and Language, 42(3), 390–405.
Bertram, R., Schreuder, R., & Baayen, R. (2000b). The balance of storage and computation in morphological processing: The role of word formation type, affixal homonymy, and productivity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(2), 489.
Bodner, G., & Masson, M. (1997). Masked repetition priming of words and nonwords: Evidence for a nonlexical basis for priming. Journal of Memory and Language, 37, 268–293.
Bradley, D. (1980). Lexical representation of derivational relation. Juncture, 37–55.
Burani, C., & Caramazza, A. (1987). Representation and processing of derived words. Language and Cognitive Processes, 2(3–4), 217–227.
Burani, C., & Laudanna, A. (1992). Units of representation for derived words in the lexicon. Advances in Psychology, 94, 361–376.
Burani, C., Salmaso, D., & Caramazza, A. (1984). Morphological structure and lexical access. Visible Language, 18(4), 342–352.
Caramazza, A., Laudanna, A., & Romani, C. (1988). Lexical access and inflectional morphology. Cognition, 28(3), 297–332.
Carlisle, J. F., & Katz, L. A. (2006). Effects of word and morpheme familiarity on reading of derived words. Reading and Writing, 19(7), 669–693.
Colé, P., Beauvillain, C., & Segui, J. (1989). On the representation and processing of prefixed and suffixed derived words: A differential frequency effect. Journal of Memory and Language, 28(1), 1–13.
Crepaldi, D., Rastle, K., Coltheart, M., & Nickels, L. (2010). 'Fell' primes 'fall', but does 'bell' prime 'ball'? Masked priming with irregularly-inflected primes. Journal of Memory and Language, 63(1), 83–99.
Dasgupta, T., Choudhury, M., Bali, K., & Basu, A. (2010). Mental representation and access of polymorphemic words in Bangla: Evidence from cross-modal priming experiments. In International Conference on Natural Language Processing.
Davis, M., & Rastle, K. (2010). Form and meaning in early morphological processing: Comment on Feldman, O'Connor, and Moscoso del Prado Martín (2009). Psychonomic Bulletin & Review, 17(5), 749–755.
De Jong, N. H., Schreuder, R., & Harald Baayen, R. (2000). The morphological family size effect and morphology. Language and Cognitive Processes, 15(4–5), 329–365.
Drews, E., & Zwitserlood, P. (1995). Morphological and orthographic similarity in visual word recognition. Journal of Experimental Psychology: Human Perception and Performance, 21(5), 1098.
Fellbaum, C. (2010). WordNet. Theory and Applications of Ontology: Computer Applications, 231–243.
Ford, M., Davis, M., & Marslen-Wilson, W. (2010). Derivational morphology and base morpheme frequency. Journal of Memory and Language, 63(1), 117–130.
Forster, K., & Davis, C. (1984).
Repetition priming and frequency attenuation in lexical access. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10(4), 680.
Frost, R., Forster, K., & Deutsch, A. (1997). What can we learn from the morphology of Hebrew? A
masked-priming investigation of morphological representation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(4), 829.
Grainger, J., Colé, P., & Segui, J. (1991). Masked morphological priming in visual word recognition. Journal of Memory and Language, 30(3), 370–384.
Hay, J., & Baayen, H. (2001). Parsing and productivity. In Yearbook of morphology (p. 35).
Hay, J., & Plag, I. (2004). What constrains possible suffix combinations? On the interaction of grammatical and processing restrictions in derivational morphology. Natural Language & Linguistic Theory, 22(3), 565–596.
Jo, E. (2000). Crowding affects reading in peripheral vision. Intel Science Talent Search, 1–15.
Marslen-Wilson, W., Bozic, M., & Randall, B. (2008). Early decomposition in visual word recognition: Dissociating morphology, form, and meaning. Language and Cognitive Processes, 23(3), 394–421.
Marslen-Wilson, W., Tyler, L., et al. (1997). Dissociating types of mental computation. Nature, 387(6633), 592–593.
Marslen-Wilson, W., Tyler, L., Waksler, R., & Older, L. (1994). Morphology and meaning in the English mental lexicon. Psychological Review, 101(1), 3.
Marslen-Wilson, W., & Zhou, X. (1999). Abstractness, allomorphy, and lexical architecture. Language and Cognitive Processes, 14(4), 321–352.
Milin, P., Kuperman, V., Kostic, A., & Baayen, R. (2009). Paradigms bit by bit: An information-theoretic approach to the processing of paradigmatic structure in inflection and derivation. Analogy in Grammar: Form and Acquisition, 214–252.
Moscoso del Prado Martín, F., Deutsch, A., Frost, R., Schreuder, R., De Jong, N. H., et al. (2005). Changing places: A cross-language perspective on frequency and family size in Dutch and Hebrew. Journal of Memory and Language, 53(4), 496–512.
Pylkkänen, L., Feintuch, S., Hopkins, E., & Marantz, A. (2004). Neural correlates of the effects of morphological family frequency and family size: An MEG study. Cognition, 91(3), B35–B45.
Rastle, K., Davis, M., Marslen-Wilson, W., & Tyler, L. (2000). Morphological and semantic effects in visual word recognition: A time-course study. Language and Cognitive Processes, 15(4–5), 507–537.
Schreuder, R., & Baayen, R. (1997). How complex simplex words can be. Journal of Memory and Language, 37, 118–139.
Taft, M. (2004). Morphological decomposition and the reverse base frequency effect. Quarterly Journal of Experimental Psychology Section A, 57(4), 745–765.
Taft, M., & Forster, K. (1975). Lexical storage and retrieval of prefixed words. Journal of Verbal Learning and Verbal Behavior, 14(6), 638–647.
Bengali Word Embeddings and Its Application in Solving Document Classification Problem

Adnan Ahmad
Researcher, Search Engine Pipilika
Department of Computer Science and Engineering
Shahjalal University of Science and Technology
Sylhet, Bangladesh.
adnan.ahmad@student.sust.edu

Mohammad Ruhul Amin
PhD student, Computer Science Department
Stony Brook University
NY 11790, USA
moamin@cs.stonybrook.edu

Abstract—In this paper, we present Bengali word embeddings and their application in the classification of news documents. Word embeddings are multi-dimensional vectors that can be created by exploiting the linguistic context of words in a large corpus. To generate the embeddings, we collected Bengali news documents of the last five years from the major daily newspapers. The word embeddings are generated using the neural-network-based language processing model Word2vec. We use the vector representations of the Bengali words to cluster them using the K-means algorithm. We show that those clusters can be used directly to perform various natural language processing tasks by solving the problem of Bengali news document classification. We use a Support Vector Machine (SVM) for the classification task and achieve a ~91% F1-score. The accuracy of our method demonstrates that our word embeddings correctly capture the semantics of words from their respective contexts.

Keywords—Bengali, Word Embedding, Word2vec, Document Classification, Word Cluster

I. INTRODUCTION

In recent years, word embeddings, or vector representations of words, have been shown to achieve significant performance gains in language modeling and in natural language processing (NLP) tasks [1]. The word embedding of a word represents it in a multi-dimensional space in which semantically similar words are placed close to each other and unrelated words are placed far from one another [2][3][4]. These distributed vector representations can thus be used to learn abstract relationships among words via unsupervised clustering methods. The features of those clusters can be used very effectively to solve various NLP tasks such as document classification, sentiment analysis, parts-of-speech tagging, named entity recognition and machine translation [1][4].

Bengali is a highly inflected as well as morphologically rich language [5]. A slight modification of a word can change its form to express a meaning completely different from the original in terms of tense, mood, person, number or gender, to name a few [5]. Clustering words that share similar concepts in Bengali is therefore a very challenging task. Very few attempts have been made to cluster Bengali words, and those attempts are mainly based on the N-gram language model [6], in which clusters are generated by considering words with their frequency in contexts up to trigrams. The N-gram model only considers consecutive words and their relative frequencies within an N-gram window; the probability of a word in context is calculated only from the preceding words. This probability cannot be used to represent the distance or similarity among all the words in a language. Thus, the N-gram model cannot be used directly for clustering semantically similar words, let alone for solving other NLP problems in Bengali.

In this paper, we present the application of Bengali word embeddings to solve the document classification problem in Bengali. We create vector representations of Bengali words using the Word2vec model [2].
We use t-SNE, an efficient dimensionality reduction technique, to map those multi-dimensional vectors into two-dimensional space [7]. We then apply K-means clustering to find clusters of word embeddings that lie in close proximity in that space
[8]. Finally, we use the cluster information of the Bengali word embeddings as features to solve the Bengali news document classification problem using a machine learning algorithm, the support vector machine (SVM) [9]. Our model achieves an accuracy of ~91%, which shows that it can be used successfully to solve many other NLP problems in Bengali. Specifically, our contributions include:

Largest collection of Bengali word embeddings: We are going to release the largest collection of Bengali word embeddings. To our knowledge, the only previously available word embeddings for Bengali were published under the Polyglot project from the Data Science Lab at Stony Brook University [4]. Polyglot used the Bengali content of Wikipedia and created word embeddings for ~55,000 words. For our work, we collected news content of the last five years from 13 major newspapers and analyzed ~52,000,000 lines to release word embeddings for ~210,000 Bengali words.

Document classification without preprocessing: We show that the clustering information of Bengali word embeddings can be used as features to solve the Bengali document classification problem. Previously, it was assumed that the accuracy of stemming and keyword identification needed to be improved in order to preprocess documents for better classification. In this paper, our method shows that word embeddings can be used directly for news document classification; hence, document classification can be done independently of the other preprocessing steps.

II. BACKGROUND STUDY

A. Bengali Word Embeddings

Word vectors, or so-called distributed representations, have a long history by now, starting perhaps from the work of S. Bengio et al. [10], where word vectors were obtained as a by-product of training a neural-net language model. Many related studies have demonstrated that these vectors do capture semantic relationships between words [11]. Word2vec is a popular word embedding model built on a two-layer neural network with the skip-gram technique, and it has been used successfully for many NLP tasks [1][2]. There are a few other popular word embedding models, namely Polyglot, GloVe and Gensim [3][4][12]. To the best of our knowledge, only Polyglot has published word embeddings for Bengali (~55,000 words, built from the Bengali Wikipedia). Abhishek et al. created a neural lemmatizer using Bengali word embeddings generated by the Word2vec model [13] on a relatively small dataset.

B. Bengali Word Clustering

A pioneering work on word clustering was proposed by Brown et al., who used an n-gram language model [14]. Brown clusters have been used successfully in a variety of NLP applications [15]. Another attempt using the n-gram model is reported by Korkmaz et al., who used a similarity function and a greedy algorithm to put words into clusters [16]. Ding et al. presented a Naive Bayes method for classifying English words using surrounding context words as features [17]. Many other approaches have been reported in the literature for other languages such as Russian, Arabic, Chinese and Japanese. As mentioned earlier, very little work has been done on Bengali word clustering so far. Tanmoy et al. proposed semantic clustering of words using synsets to identify Bengali multi-word expressions [18]. Sabir et al. proposed an unsupervised machine learning technique to build Bengali word clusters based on their semantic and contextual similarity using the N-gram language model [6].

C. Bengali Document Classification

For text classification in other languages, i.e.,
English, Chinese, Hindi, Arabic and the European languages, various supervised learning techniques have been used, such as Association Rules
[19], Neural Networks [20], K-Nearest Neighbour [21], Decision Trees [22], Naive Bayes [23], Support Vector Machines [24], and N-grams [25]. Previous works on document classification for Bengali are mainly based on N-grams [26], Naive Bayes [27] and a Stochastic Gradient Descent based classifier [28]. The features of word clusters have been used to perform various NLP tasks for a long time. We came across the work of Y. Yuan et al., who used word clusters created from Word2vec to perform document clustering in the Chinese language by applying a Support Vector Machine (SVM) [29].

III. METHODOLOGY

A. Neural Network and Word2vec

Words occurring in the same or similar contexts tend to convey similar meanings, and there are many approaches to computing semantic similarity between words based on their distribution in a corpus. Word2vec models are shallow, two-layer neural networks trained in an unsupervised fashion to reconstruct the linguistic contexts of words. Word2vec takes a large corpus of text as training input and produces a set of vectors called embeddings, typically of several hundred dimensions, one for each unique word in the corpus. Given enough data, usage and contexts, Word2vec can make highly accurate guesses about a word's meaning based on its past appearances. Word2vec produces word embeddings in one of two ways: either using the context to predict a target word, a method known as continuous bag of words (CBOW), or using a word to predict a target context, which is called skip-gram (Figure 1).

Fig. 1. Two ways to compute the Word2vec model: 1. Continuous Bag of Words (CBOW) and 2. Skip-gram.

To generate Bengali word embeddings, we use the skip-gram method, because skip-gram works well with small amounts of training data and represents even rare words or phrases well [2]. In our work, we create a Bengali Word2vec model from online newspaper articles drawn from 13 different newspapers over the years 2010-2015, a collection of 2,185,701 documents. To our best knowledge, this is the largest Word2vec model for the Bengali language. We create two separate Word2vec models, of dimension 100 and 200, using default parameters. We also learned a vector for the unknown word (UNK). Later, we use those word vectors to create word clusters for Bengali and use those clusters in the document classification task.
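The authors trained their models with deeplearning4j in Java (see Section IV); a minimal equivalent sketch in Python using gensim, with the hyperparameters reported in this paper (skip-gram, window 5, minimum word frequency 5, dimensions 100 and 200), might look as follows. The placeholder corpus is our own stand-in for the tokenized news sentences.

```python
# Equivalent sketch (not the authors' code) of skip-gram Word2vec training.
from gensim.models import Word2Vec

# Placeholder corpus: an iterable of tokenized Bengali sentences.
sentences = [["bengali", "news", "sentence", "tokens"]] * 100

for dim in (100, 200):
    model = Word2Vec(sentences, vector_size=dim, window=5,
                     min_count=5, sg=1)  # sg=1 selects skip-gram
    model.save(f"bengali_word2vec_{dim}.model")
```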
C. Support Vector Machine (SVM) as Document Classifier

For the document classification task, we use the Support Vector Machine (SVM) classification algorithm. SVM is a popular supervised learning algorithm for classification, and many researchers have attempted document classification with it [30]. Given a set of training documents, each marked with a particular category, an SVM training algorithm builds a model that assigns new examples to one of the predefined categories.

Fig. 3. The process of training the SVM classifier.

Figure 3 shows the process of training the SVM classifier using word clusters as features. Each row represents a document. The first column holds the category id; the remaining columns hold cluster ids and how many words of a particular document belong to each cluster. The model was trained using the default parameters.

IV. EXPERIMENTS

Our experiment is completed in three steps. First, we strip off the HTML tags from all the news crawled from Bengali online newspapers and use those data to train the Word2vec model and generate embeddings. Second, we apply the dimensionality reduction technique and K-means clustering to a subset of words to create clusters. Finally, using the word clusters as features, we perform the document classification task to evaluate the word embeddings and clusters.

A. Data for Word2vec

In general, a Word2vec model takes a huge amount of data (typically about 100 billion words) as training text to create accurate models, but not much Bengali data is available online. We collected online newspaper articles from 13 different Bengali newspapers from 2010 to 2015. The total number of articles is 2,185,701, totalling 51,920,010 sentences. Most of the sentences contain 5 to 25 words. For the Word2vec model, we only kept words that occurred at least 5 times in the documents, totalling 210,535 words. Figure 4 shows the frequencies of sentences of various lengths.

Fig. 4. Sentence length vs. count.

B. Data for Document Classification

To perform document classification, we collected ~20,000 Bengali online newspaper documents, each labeled with its particular class. We use 7 general classes such as Sports, Entertainment, Politics, etc. An overview of the data is given in Table I. We set aside 70% of the documents in each class for training and 30% for testing.

TABLE I. TOTAL NUMBER OF DOCUMENTS FOR CLUSTERING

Class | Class name          | Number of documents
0     | Sports              | 2232
1     | Entertainment       | 2655
2     | Accident and Crime  | 4136
3     | International       | 2250
4     | Science & Tech.     | 2906
5     | Politics            | 2808
6     | Economics           | 2718

C. Clustering Word Embeddings for Document Classification

Using the training data mentioned above, we train our Word2vec model. We use deeplearning4j (http://deeplearning4j.org/word2vec), a Java implementation of the Word2vec model, with the default experimental setup: context window size 5 and minimum word frequency 5. We created two different models with vector sizes 100 and 200 for our experiment. The vocabulary size of the final model is 210,535. As words are represented as vectors in a Word2vec model, each word is independent of its contexts: we can take any two words and calculate the distance or similarity between them. That means we can use the whole corpus, or any subset of words from it, to cluster the words by directly applying the K-means clustering algorithm. But before applying K-means to those word vectors, we first reduce the dimension of the vectors to two using the t-SNE dimensionality reduction technique and then apply K-means. We use R implementations of both t-SNE (https://lvdmaaten.github.io/tsne/) and K-means (https://stat.ethz.ch/R-manual/R-devel/library/stats/html/kmeans.html).
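Before sweeping over K, it helps to make the feature construction of Fig. 3 concrete. A minimal sketch follows, assuming each document is already tokenized and that a `word_to_cluster` mapping (the toy Bengali tokens and ids here are hypothetical) is available; `sklearn.svm.SVC` stands in for the Scikit-learn libSVM wrapper used below:

    # Sketch: turn word-cluster membership counts into SVM features (cf. Fig. 3).
    import numpy as np
    from sklearn.svm import SVC

    K = 600  # number of word clusters

    def cluster_count_features(doc_tokens, word_to_cluster, K):
        """Return a K-dim vector: how many words of the document fall in each cluster."""
        counts = np.zeros(K)
        for token in doc_tokens:
            cluster_id = word_to_cluster.get(token)   # out-of-vocabulary words are skipped
            if cluster_id is not None:
                counts[cluster_id] += 1
        return counts

    # Placeholder corpus: (category_id, tokenized document) pairs.
    word_to_cluster = {"খেলা": 12, "গোল": 12, "নির্বাচন": 407}   # hypothetical mapping
    train_docs = [(0, ["খেলা", "গোল"]), (5, ["নির্বাচন"])]

    X = np.array([cluster_count_features(toks, word_to_cluster, K) for _, toks in train_docs])
    y = np.array([label for label, _ in train_docs])

    clf = SVC()          # default parameters, as in this work
    clf.fit(X, y)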
We create several experiments for different embedding sizes and numbers of clusters, using K = {100, 200, 300, 400, 500, 600}. For each document, we use the number of its words falling in each cluster as features to perform SVM classification. We use Scikit-learn's libSVM (http://scikit-learn.org/stable/modules/svm.html), a popular Python implementation of SVM, to perform multi-class classification. We discuss the outcome of our experiments in the results section.

V. RESULTS

In order to evaluate the word clusters, three methods can be used: measuring the internal coherence of the clusters, embedding the clusters in an application, or evaluating against a manually generated answer key [31]. The first method is generally used by the clustering algorithms themselves. The second method is especially relevant for applications that can deal with noisy clusters, and it avoids the need to generate answer keys specific to the word clustering task. The third method requires a gold standard such as WordNet [32] or some other ontological resource. English and a number of other languages have resources such as WordNet [33][34]. Unfortunately, no WordNet exists for Bengali. In order to evaluate the clusters, we therefore perform an NLP task, Bengali document classification, using the word-cluster information as features, and measure the accuracy of that task.

In Figure 5, we show classification accuracy as a function of the word clusters, for both embedding sizes 100 and 200 and K = {100, 200, 300, 400, 500, 600}. We achieve our best result, ~91.02% F1-score, using K = 600 for both embedding sizes. As we reduce the embedding dimension to two before clustering, we observe no significant effect of the original embedding dimension on the clustering or on the classification task. We must also mention, however, that for embedding sizes below 100 the word embeddings did not result in meaningful clusters: contextually unrelated words showed up in the same cluster, which resulted in poor classification performance. We also observed this problem while using Polyglot to create Bengali embeddings; Polyglot uses only 64 dimensions, which failed to capture the contexts of the Bengali language. Figure 5 also shows that the word clusters become more meaningful and accurate when the number of clusters K in K-means is large. When we cluster the words with a relatively small value of K (K = 50), the clusters become so general that semantic and contextual similarity is hard to relate; when K is large (K = 600), we see more accurate and meaningful clusters.

Fig. 5. Cluster size vs. classification accuracy.

In our task, the news can be classified into seven classes. For each class, we measure the precision, recall and F1-score, shown in Table II for the experimental setup D = 100 and K = 600. Our test data contains 4,713 documents, which is 30% of the total dataset and separate from the training set. The results show an average precision of 91%, recall of 90% and F1-score of 91%.

TABLE II. CLASSIFICATION REPORT (D = 100, K = 600)

Class              | Precision | Recall | F1-score | Test documents
Sports             | 0.98      | 0.94   | 0.96     | 528
Entertainment      | 0.93      | 0.93   | 0.93     | 627
Accident and Crime | 0.92      | 0.91   | 0.91     | 996
International      | 0.90      | 0.89   | 0.89     | 566
Science & Tech.    | 0.91      | 0.86   | 0.88     | 677
Politics           | 0.93      | 0.87   | 0.90     | 653
Economics          | 0.77      | 0.92   | 0.84     | 654
avg / total        | 0.91      | 0.90   | 0.91     | 4713
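For readers reproducing the per-class numbers of Table II and the normalized confusion matrix of Fig. 7, a scikit-learn sketch (the `y_true`/`y_pred` arrays are placeholders for the actual test labels and SVM predictions):

    # Sketch: per-class precision/recall/F1 and a row-normalized confusion matrix.
    import numpy as np
    from sklearn.metrics import classification_report, confusion_matrix

    # Placeholder labels; in this work they come from the 4,713 test documents.
    y_true = np.array([0, 0, 1, 2, 2, 2, 3])
    y_pred = np.array([0, 1, 1, 2, 2, 3, 3])

    print(classification_report(y_true, y_pred))

    cm = confusion_matrix(y_true, y_pred)
    cm_normalized = cm.astype(float) / cm.sum(axis=1, keepdims=True)  # normalize by class support
    print(cm_normalized)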
We evaluated our approach with the TREC evaluation technique to produce a precision-recall graph [35]. Figure 6 shows the precision-recall graph for the experiment mentioned above.

Fig. 6. Overall precision-recall curve of document classification (D = 100, K = 600).

In Figure 7, we present the confusion matrix to elucidate the performance of our classification model on the test dataset; the matrix is normalized by class support size.

Fig. 7. Normalized confusion matrix (D = 100, K = 600).

From Figure 7, we can see that our classifier performed slightly worse for the International and Science & Tech. classes, which are often confused with the Economics class. One possible reason is that Economics documents often contain words and topics common to both Science & Tech. and International news. This is a possible reason why our system achieved an F1-score below 95%. Another factor we believe limits performance is the size of the data used to train the word embeddings: the more data we use, the more accurate the vector representations, and therefore the better the quality of the clusters. For a language like English a typical corpus contains 100 billion words; Bengali has very little online content by comparison.

VI. CONCLUSION

We demonstrate that Bengali word embeddings can be used to create word clusters that capture the semantic relationships of words from their contexts. We use the cluster membership of words as features to perform an NLP task, document classification, and achieve a performance of ~91% F1-score. We show that we can achieve such performance without any preprocessing of the Bengali text, which demonstrates the effectiveness of the word embedding model for NLP tasks in Bengali. We observed that the larger the text corpus, the better the word clusters that can be formed, so we will collect more Bengali data to generate embeddings. We will continue our study of how to learn better vector representations for each word by examining other existing embedding models: Polyglot, Gensim and GloVe. We will also study the effect of dimensionality reduction on document classification, and we will use this understanding to address other classification problems such as POS tagging, NER and sentiment analysis in Bengali.

VII. ACKNOWLEDGEMENT

This research was partially supported by the search engine Pipilika, a Bengali search engine initially developed by Shahjalal University of Science & Technology (SUST). We thank the Pipilika team, especially Mahbubur Rub Talha and Tushar Chakraborty, who provided the newspaper data that greatly assisted this research.

REFERENCES

[1] Collobert, Ronan, et al. "Natural language processing (almost) from scratch." Journal of Machine Learning Research 12 (2011): 2493-2537.
[2] Mikolov, T., et al. "Distributed representations of words and phrases and their compositionality." Advances in Neural Information Processing Systems (2013).
[3] Pennington, Jeffrey, Richard Socher, and Christopher D. Manning. "GloVe: Global Vectors for Word Representation." EMNLP, 2014.
[4] Al-Rfou, Rami, Bryan Perozzi, and Steven Skiena. "Polyglot: distributed word representations for multilingual NLP." arXiv preprint arXiv:1307.1662 (2013).
[5] Ali, Md Nawab Yousuf, et al. "Morphological analysis of Bangla words for Universal Networking Language." Third International Conference on Digital Information Management (ICDIM 2008). IEEE, 2008.
[6] Ismail, Sabir, and M. Shahidur Rahman. "Bangla word clustering based on N-gram language model." International Conference on Electrical Engineering and Information & Communication Technology (ICEEICT). IEEE, 2014.
[7] Maaten, Laurens van der, and Geoffrey Hinton. "Visualizing data using t-SNE." Journal of Machine Learning Research 9 (2008): 2579-2605.
[8] Jain, Anil K. "Data clustering: 50 years beyond K-means." Pattern Recognition Letters 31.8 (2010): 651-666.
[9] Cristianini, Nello, and John Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, 2000.
[10] Bengio, Y., Ducharme, R., and Vincent, P. "A neural probabilistic language model." NIPS, 2001.
[11] Yao, Kaisheng, et al. "Recurrent neural networks for language understanding." INTERSPEECH, 2013.
[12] Rehurek, Radim, and Petr Sojka. "Software framework for topic modelling with large corpora." Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, 2010.
[13] Chakrabarty, A., and Garain, U. "BenLem (a Bengali lemmatizer) and its role in WSD." ACM Transactions on Asian and Low-Resource Language Information Processing 15.3 (2016): 1-18. doi:10.1145/2835494.
[14] Brown, P. F., Desouza, P. V., Mercer, R. L., Della Pietra, V. J., and Lai, J. C. "Class-based n-gram models of natural language." Computational Linguistics 18.4 (1992): 467-479.
[15] Turian, Joseph, Lev Ratinov, and Yoshua Bengio. "Word representations: a simple and general method for semi-supervised learning." Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, 2010.
[16] Korkmaz, E. E. "A method for improving automatic word categorization." Doctoral dissertation, Middle East Technical University, 1997.
[17] Ding, W., Al-Mubaid, H., and Kotagiri, S. "Word classification: an experimental approach with Naïve Bayes." Conference on Computers and Their Applications, 2009.
[18] Chakraborty, Tanmoy, Dipankar Das, and Sivaji Bandyopadhyay. "Identifying Bengali multiword expressions using semantic clustering." Lingvisticæ Investigationes 37.1 (2014): 106-128.
[19] Lopes, A., Pinho, R., Paulovich, F. V., and Minghim, R. "Visual text mining using association rules." Computers & Graphics 31 (2007): 316-326.
[20] Sebastiani, F. "Machine learning in automated text categorization." ACM Computing Surveys (CSUR) 34 (2002): 1-47.
[21] Denoeux, T. "A k-nearest neighbor classification rule based on Dempster-Shafer theory." IEEE Transactions on Systems, Man and Cybernetics 25 (1995): 804-813.
[22] Dumais, Susan, et al. "Inductive learning algorithms and representations for text categorization." Proceedings of the Seventh International Conference on Information and Knowledge Management. ACM, 1998.
[23] Frank, Eibe, and Remco R. Bouckaert. "Naive Bayes for text classification with unbalanced classes." European Conference on Principles of Data Mining and Knowledge Discovery. Springer Berlin Heidelberg, 2006.
[24] Joachims, Thorsten. "Text categorization with support vector machines: learning with many relevant features." European Conference on Machine Learning. Springer Berlin Heidelberg, 1998.
[25] Peng, Fuchun, and Dale Schuurmans. "Combining naive Bayes and n-gram language models for text classification." European Conference on Information Retrieval. Springer Berlin Heidelberg, 2003.
[26] Mandal, Ashis Kumar, and Rikta Sen. "Supervised learning methods for Bengali web document categorization." arXiv preprint arXiv:1410.2045 (2014).
[27] Chy, Abu Nowshed, Md Hanif Seddiqui, and Sowmitra Das. "Bangla news classification using naive Bayes classifier." 16th International Conference on Computer and Information Technology (ICCIT). IEEE, 2014.
[28] Kabir, Fasihul, et al. "Bangla text document categorization using Stochastic Gradient Descent (SGD) classifier." International Conference on Cognitive Computing and Information Processing (CCIP). IEEE, 2015.
[29] Yuan, Yanhong, Liming He, Li Peng, and Zhixing Huang. "A new study based on Word2vec and cluster for document categorization." Journal of Computational Information Systems 10.21 (2014): 9301-9308.
[30] Tong, Simon, and Daphne Koller. "Support vector machine active learning with applications to text classification." Journal of Machine Learning Research 2 (2001): 45-66.
[31] Lindén, Krister, and Jussi Olavi Piitulainen. "Discovering synonyms and other related words." Proceedings of COLING 2004 CompuTerm 2004: 3rd International Workshop on Computational Terminology, 2004.
[32] Miller, G. A., Beckwith, R., Fellbaum, C. D., Gross, D., and Miller, K. "WordNet: an online lexical database." International Journal of Lexicography 3.4 (1990): 235-244.
[33] Sagot, Benoît, and Darja Fišer. "Building a free French wordnet from multilingual resources." Proceedings of Ontolex 2008, Marrakech, Morocco, 2008.
[34] Bhattacharyya, Pushpak. "IndoWordNet." Lexical Resources Engineering Conference (LREC 2010), Malta, May 2010.
[35] Voorhees, E., and Harman, D. TREC: Experiment and Evaluation in Information Retrieval. MIT Press, 2005.
Automatic Bengali Document Categorization Based on Word Embedding and Statistical Learning Approaches

Conference paper, February 2018. DOI: 10.1109/IC4ME2.2018.8465632

Md. Rajib Hossain, Dept. of Computer Science & Engineering, Chittagong University of Engineering & Technology, Chittagong, Bangladesh. E-mail: rajcsecuet@gmail.com
Mohammed Moshiul Hoque, Dept. of Computer Science & Engineering, Chittagong University of Engineering & Technology, Chittagong, Bangladesh. E-mail: moshiulh@yahoo.com

Abstract—The automated categorization of text documents into predetermined categories has witnessed growing interest in the last few years, due to the huge availability of documents in digital form and the ensuing need to organize them. Automatic document categorization is the process of assigning one or more categories or classes to a document, making it easier to manipulate and sort. This paper proposes a Bengali document categorization technique based on the word2vec word embedding model and the stochastic gradient descent (SGD) statistical learning algorithm with a multi-class SVM. The semantic features of a document are extracted by word2vec, and SGD training reduces the complexity of the multi-class SVM that classifies the unlabeled data. An experiment with 10,000 training and 4,651 testing documents shows 93.33% accuracy.

Keywords—Bangla language processing; document categorization; word embedding; machine learning.

I. INTRODUCTION

In recent years, automatic document categorization has gained much attention from NLP researchers due to the availability of texts in digital form. Document categorization is the task of assigning a text document, or a sequence of text documents, to one or more predefined categories. The number of text documents in digital form has grown enormously day by day, in both size and variety. Therefore, an automatic document categorization system should be developed to handle large amounts of text data and organize or sort them easily and quickly. Bangla is spoken by about 245 million people in Bangladesh and two states of India, and is the 7th most spoken language in the world [1]. With the popularity of the Unicode system and the growing use of the Internet, Bangla text documents in the digital domain have increased over the last few years.
Although a few studies have been conducted in the field of Bangla language processing, on topics such as syntax analysis, machine translation and optical character recognition, automatic text document categorization is also an important problem that needs to be solved. Bangla document categorization may be used by security agencies to identify suspected web content or detect spam, by daily newspapers to organize articles by subject category, by libraries to classify papers or books, in medicine to categorize patient reports from multiple aspects using taxonomies of disease categories, and so on. Many document categorization systems have been developed for English, but no usable system has been developed for Bangla texts. In this work, our purpose is to design a framework to classify Bengali text documents using word embedding, SGD and a multi-class SVM. Convolutional neural networks [2], character-level networks [3] and recurrent neural networks [4, 5] have achieved very good results in document classification, but they require costly hardware and large training datasets. Our proposed system uses a word embedding technique, and the embedded text is then used for categorization. Word embedding is a feature extraction process in which each word yields features based on its semantic and syntactic relations. Semantic vector-space models of language represent each word with a real-valued vector. Among the many well-known statistical algorithms for word embedding, GloVe [6] and Word2Vec [7] are state-of-the-art or competitive. We tuned the hyperparameters of both algorithms and trained on a large amount of data for Bangla word embedding. Each word in a sentence is projected into the embedding vector space by being multiplied with a weight matrix, forming a sequence of dense real-valued vectors. This sequence is then fed into the SGD-trained model, which processes the word sequence so that, in turn, the SVM can classify the text.

II. RELATED WORK

A number of significant studies have been conducted on word embedding and document categorization for English. In recent years, word embeddings have shown high performance for text classification in English and some European languages [6, 7]. However, no significant embedding technique has been developed for classifying Bangla texts. Only a few word embedding techniques are found for Bangla, such as TF-IDF [1], N-gram [8] and lexical [9] approaches. The TF-IDF based technique uses only word counts and shows low performance due to its lack of feature-extraction capability. N-gram word embedding is a statistical process in which the semantic relation depends on the previous N words; as a result, the current word embedding may drift toward low performance, and the N-gram model does not represent the semantic meaning of the whole sentence. Moreover, lexical features do not work well for Bangla due to its large inflectional diversity in verbs, tense, nouns, etc. In a recent work, the Word2Vec word embedding technique was used for text classification [10], but its accuracy was limited by the lack of a Bangla corpus and of hardware support. Krendzelak et al. describe a text categorization system with machine learning and hierarchical structures, which uses a tree-based Naive Bayesian categorization process [11, 12]. It is a conventional machine learning system whose accuracy is limited by its feature-extraction process and training techniques. An unsupervised technique with latent semantic features and a Gaussian mixture model has also been used for text categorization [13]. Most text documents contain a huge number of sentences, and the category name is just a summary of the document; for this reason, it is really hard to find the relation between the category name and the document contents.
Text categorization of Turkish using SVM has been proposed and achieves good accuracy, but its time complexity is large due to the high feature dimensionality [14]. A system for Arabic text categorization was developed using Naive Bayes on a controlled dataset with reasonable accuracy, but it fails on unknown data [15]. In recent years a few studies have used machine learning techniques. Clustering-based approaches [10, 8] achieve better results, but they have several problems. In a cluster-based technique, the accuracy depends on the number of clusters, and no work has been conducted to determine the optimal number of clusters. Outliers are another problem: a cluster center may shift drastically due to outliers, and as a result the final trained model can overfit. Bangla web document categorization based on multiple supervised and unsupervised algorithms has been applied [1], but TF-IDF based embedding combined with multiple classifier algorithms makes the classifier slow; for this reason its performance and accuracy are low and it cannot be used in real time. In our work, we propose a word2vec embedding with an SGD-based system for Bangla text categorization, which is expected to overcome the shortcomings of previous work.

III. METHODOLOGY

We propose a document categorization system that is trained on Bengali text documents with a supervised algorithm and produces a classifier model onto which unlabeled documents are projected to obtain a category name. For the training module, we have a newspaper dataset collected from different newspapers. Let the training set be $X = \{x_1, x_2, x_3, \dots, x_n\}$ and its classes $C = \{y_1, y_2, y_3, \dots, y_c\}$, where $n$ is the total number of training documents and $c$ is the total number of categories or classes. This module takes the text documents $X$ with labels $C$ as input and outputs a sequence of word lists. A schematic representation of the proposed text document classifier is shown in Fig. 1.

Fig. 1. Documents classifier training and projection module.

Let $x_1$ = "সেডন পার্ক, হ্যামিল্টন। মাহমুদউল্লাহর জন্য মাঠটা বুঝি ভীষণ পয়া! টেস্টে একমাত্র সেঞ্চুরি এসেছিল এ মাঠেই।" ("Seddon Park, Hamilton. The ground seems remarkably lucky for Mahmudullah! His only Test century came at this very ground.") When the document $x_1$ is projected with the classifier model, the expected output is $y_1 = \text{sports}$.

A. Word embedding model

The main goal of word embedding is to convert words into numeric values in order to manipulate an understanding of natural language. This module takes a one-hot vector for each word as input and outputs an embedding feature vector for that word. Word2Vec is a class of algorithms that learns, in an unsupervised way, word vector representations that capture semantic relations well. We build a dense vector for each word so that it is easy to predict the other words appearing in its context; the output of this module is the dense vector $w \cdot W_f$. Fig. 2 shows the shallow 2-layer neural-network word embedding module.

Fig. 2. Skip-gram model for the 2-layer shallow network.
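As a concrete illustration of this module (not the authors' published code), a gensim-based sketch of training skip-gram embeddings on tokenized Bengali sentences; the window size and 100-dimensional vectors mirror settings reported later, while the toy corpus and lowered min count are assumptions:

    # Sketch: train skip-gram Word2Vec embeddings on a tokenized Bengali corpus.
    from gensim.models import Word2Vec

    # Placeholder corpus: one tokenized sentence per list; the real input would be
    # millions of newspaper sentences.
    sentences = [
        ["সেডন", "পার্ক", "হ্যামিল্টন"],
        ["মাহমুদউল্লাহর", "জন্য", "মাঠটা", "পয়া"],
    ]

    model = Word2Vec(
        sentences,
        vector_size=100,   # embedding dimension used in this work
        window=5,          # context window size
        min_count=1,       # this work uses 5; lowered so the toy corpus survives
        sg=1,              # 1 = skip-gram, 0 = CBOW
    )

    vec = model.wv["পার্ক"]   # dense 100-dim vector for a word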
We define a model that aims to predict between a center word $w_t$ and its context words in terms of word vectors. We look at many positions $t$ in a big Bengali corpus and keep adjusting the vector representations of the words to minimize the loss. Given the large Bengali corpus, for each word $t = 1, \dots, T$ we predict the surrounding words in a window of radius $m$.

Objective function $J'(\theta)$: the objective maximizes the probability of any context word given the current center word,

$$J'(\theta) = \prod_{t=1}^{T} \prod_{\substack{-m \le j \le m \\ j \ne 0}} p(w_{t+j} \mid w_t;\ \theta) \qquad (1)$$

where $w_t$ is the center word and $w_{t+j}$ is a context word. The negative log-likelihood objective, minimized to maximize the context-word probability, is

$$J(\theta) = -\frac{1}{T} \sum_{t=1}^{T} \sum_{\substack{-m \le j \le m \\ j \ne 0}} \log p(w_{t+j} \mid w_t;\ \theta) \qquad (2)$$

To predict the surrounding context words in a window of radius $m$ of every word, the simplest first formulation of $p(w_{t+j} \mid w_t)$ is

$$p(o \mid c) = \frac{\exp\!\left(u_o^{\top} v_c\right)}{\sum_{w} \exp\!\left(u_w^{\top} v_c\right)} \qquad (3)$$

where $p(o \mid c)$ is the context-word probability with respect to the center word, $o$ is the outside (output) word index, $c$ is the center word index, and $v_c$ and $u_o$ are the center and outside vectors of indices $c$ and $o$. Finally, a softmax over the scores yields the probability of word $o$. The softmax function is defined as in eq. (4):

$$p_i = \frac{e^{u_i}}{\sum_{j} e^{u_j}} \qquad (4)$$

where the numerator $e^{u_i}$ is the unnormalized score of the $i$th class and the denominator normalizes the scores into probabilities. The probability maximization, log-likelihood, context-word probability and softmax function are combined to design a shallow 2-layer neural network. The network's input layer is fed a one-hot vector, which is projected into the hidden layer; the output layer contains the semantic features corresponding to the input word vector. The input feature function takes one embedding vector per word and outputs a concatenated feature vector.

B. Extracted features

Feature extraction is a process of domain transformation. In a document categorization system, the input domain is raw text documents; the feature extractor processes the raw text and outputs a numeric value of fixed dimension for each word. For each word we look up the embedding model $w \cdot W_f$ and concatenate the feature vectors.

C. SGD and SVM learning

The main objective of the SGD and SVM learning is to propagate the feature vectors and collect the distinguishing features for the classifier model. The multi-class SVM is converted to binary classification problems using the one-vs-all (OVA) technique. Due to the large-scale dataset, we use stochastic gradient descent (SGD) for training, which reduces the training time through gradient optimization. The parameters $\theta_j$ are updated, with batch size $N$, by eq. (5):

$$\theta_j := \theta_j - \alpha \sum_{i=1}^{N} \big(h_\theta(x_i) - y_i\big)\, x_i \qquad (5)$$

where $\theta_j$ is the updated parameter, $\alpha$ is the learning rate, $x_i$ is the input data, $y_i$ is the $i$th class label and $h_\theta$ is the hypothesis function.
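A numeric sketch of eqs. (3)-(5) follows, with random toy vectors standing in for trained parameters (all names and values here are illustrative, not this work's data):

    # Sketch: context-word softmax (eqs. 3-4) and one SGD-style update (eq. 5).
    import numpy as np

    rng = np.random.default_rng(0)
    V, d = 1000, 100                    # toy vocabulary size and embedding dimension
    U = rng.normal(size=(V, d))         # outside-word vectors u_w
    v_c = rng.normal(size=d)            # center-word vector v_c

    # Eqs. (3)-(4): p(o|c) = exp(u_o . v_c) / sum_w exp(u_w . v_c)
    scores = U @ v_c
    scores -= scores.max()              # numerical stability before exponentiation
    p = np.exp(scores) / np.exp(scores).sum()
    o = 42
    print(f"p(o={o} | c) = {p[o]:.6f}")

    # Eq. (5): theta := theta - alpha * sum_i (h_theta(x_i) - y_i) * x_i
    theta = np.zeros(d)
    alpha = 0.01
    X_batch = rng.normal(size=(32, d))           # batch of feature vectors
    y_batch = rng.integers(0, 2, size=32)        # binary labels (one-vs-all view)
    h = X_batch @ theta                          # linear hypothesis h_theta(x_i)
    theta -= alpha * ((h - y_batch) @ X_batch)   # gradient-style parameter update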
Now the SVM function partitions the input data at each step and tries to minimize the objective (error) function:

$$L(\hat{y}, y) = -\big[\, y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \,\big] \qquad (6)$$

Here $L(\hat{y}, y)$ is the loss (objective) function that reduces the error of the SVM function, $y$ is the actual label and $\hat{y}$ is the predicted label.

D. Document classifier model

The training algorithm generates a model represented by $\theta_{k \times f}$, where $k$ denotes the number of classes and $f$ the dimension of the trained weight features. Let $x_i$ be an unlabeled feature vector and $b$ a bias term; $x_i$ is projected with the model matrix $\theta_{k \times f}$. The hypothesis is given by

$$h_\theta(x_i) = \theta_{k \times f} \cdot x_i + b \qquad (7)$$

From eq. (7) we obtain a score vector $(s_1, s_2, \dots, s_9)$, and the maximum score is obtained by eq. (8), from which we determine the expected document category class:

$$\max\big(h_\theta(x_i)\big) \qquad (8)$$

IV. EXPERIMENTS

The whole system runs on a GTX 1070 GPU with 32 GB of physical memory and a Core i7 processor. We collected a large number of Bengali documents for word embedding and document classification from the web, blogs, newspapers and online books. Table I summarizes the statistics of the data used for word embedding.

TABLE I. WORD EMBEDDING DATA SUMMARY

Number of documents | 84,000
Number of sentences | 102,096
Total unique words  | 850,400
Word embedding dim  | 100

In order to categorize the text documents, we collected a handcrafted dataset from different online newspapers [16-19]. Table II shows a summary of the dataset used for classification.

TABLE II. HANDCRAFT CLASSIFIER DATASET SUMMARY

                            | Training | Testing
Number of classes           | 9        | 9
Number of documents         | 10,000   | 4,651
Average words per document  | 60       | 60
Feature length per document | 600      | 600
Padding within documents    | Allowed  | Allowed

In Table II, the feature length of each document is a floating-point value that depends on file size and padding. We developed a document categorization dataset with 10,000 training and 4,651 testing documents. Table III shows the number of training and testing documents in each category; the Crime category has the largest number of documents and the Environment category the smallest. All data are stored in .txt format.

TABLE III. NUMBER OF CATEGORIES USED FOR CLASSIFICATION

Category name      | Training documents | Testing documents
Accident (A)       | 996                | 492
Crime (C)          | 2120               | 1089
Economics (EC)     | 850                | 345
Entertainment (EN) | 1400               | 686
Environment (ENV)  | 355                | 40
International (I)  | 900                | 412
Politics (P)       | 659                | 274
Science_tech (ST)  | 1150               | 513
Sports (SP)        | 1570               | 800
Total              | 10000              | 4651
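A sketch of the SGD-trained multi-class linear SVM of Section III-C, using scikit-learn's SGDClassifier with hinge loss (which trains a one-vs-all linear SVM by stochastic gradient descent); the random 600-length document features are placeholders for the feature vectors described in Table II:

    # Sketch: multi-class linear SVM trained with SGD (one-vs-all, hinge loss).
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)

    # Placeholder document features: each document yields a fixed 600-length vector.
    X_train = rng.normal(size=(200, 600))
    y_train = rng.integers(0, 9, size=200)       # 9 categories, as in Table III

    clf = SGDClassifier(loss="hinge", alpha=1e-4, max_iter=1000)
    clf.fit(X_train, y_train)                    # OVA is handled internally per class

    X_test = rng.normal(size=(5, 600))
    print(clf.predict(X_test))                   # predicted category ids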
V. EVALUATION MEASURES

In order to evaluate the proposed system, we use several evaluation metrics: precision, recall, F1-measure and the confusion matrix.

- Precision: in document categorization, precision is the fraction of retrieved documents that are relevant to the query,

$$\text{precision} = \frac{|R_d \cap R_e|}{|R_e|} \qquad (9)$$

- Recall: recall is the fraction of the relevant documents that are successfully retrieved,

$$\text{recall} = \frac{|R_d \cap R_e|}{|R_d|} \qquad (10)$$

Here $R_d$ and $R_e$ are the relevant and retrieved documents, respectively.

- F1-measure: a measure that combines precision and recall as their harmonic mean, the traditional F1 measure,

$$F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} \qquad (11)$$

- Confusion matrix: in machine learning, and specifically in statistical classification, a confusion matrix (also known as an error matrix) is a table in which each row represents the instances of a predicted class and each column represents the instances of an actual class.

VI. RESULTS

Table IV shows the precision, recall, F1-measure and support values for each class.

TABLE IV. STATISTICAL EVALUATION SUMMARY

Category name      | Precision | Recall | F1-score | Support
Accident (A)       | 0.91      | 0.93   | 0.92     | 492
Crime (C)          | 0.91      | 0.95   | 0.93     | 1089
Economics (EC)     | 0.90      | 0.87   | 0.88     | 345
Entertainment (EN) | 0.96      | 0.98   | 0.97     | 686
Environment (ENV)  | 0.96      | 0.65   | 0.78     | 40
International (I)  | 0.90      | 0.84   | 0.87     | 412
Politics (P)       | 0.96      | 0.89   | 0.93     | 274
Science_tech (ST)  | 0.91      | 0.95   | 0.93     | 513
Sports (SP)        | 0.99      | 0.96   | 0.97     | 800
Avg/total          | 0.93      | 0.93   | 0.93     | 4651

Fig. 3 shows the precision vs. recall curve, which reveals that the document categorization system achieved a good AUC result (97.00%).

Fig. 3. The precision-recall curve for document categorization.

Table V shows the confusion matrix for document categorization with errors. Only the A (Accident) class overlaps with the C (Crime) class. The Sports class achieved the best accuracy and the Environment class the lowest; the number of training documents and variations in data patterns are the main reasons for these accuracy differences.

TABLE V. CONFUSION MATRIX (cell values not recoverable from the extracted text)

Fig. 4 presents the ROC curves for the different classes, plotting the true positive rate against the false positive rate at different cut-off points. Each point on a ROC curve represents a sensitivity/specificity pair corresponding to a particular decision threshold. The area under the ROC curve (AUC) measures how well a parameter can distinguish among the classes.

Fig. 4. ROC curve for multi-class categorization.

We compared the accuracy of the proposed system with existing Bangla text classification techniques [1, 10], where accuracy means average accuracy. Table VI summarizes the comparison, which shows that the proposed system outperforms the existing techniques with a higher accuracy of 93.33%.

TABLE VI. PERFORMANCE COMPARISON

Method                     | Training docs | Testing docs | Classes | Accuracy (%)
TF-IDF + SVM [1]           | 1000          | 118          | 5       | 89.14
Word2Vec + K-NN + SVM [10] | 19705         | 4713         | 7       | 91.02
Proposed                   | 10000         | 4651         | 9       | 93.33

VII. CONCLUSION

Bangla text document classification is an important research issue in Bangla language processing. In recent years, due to the huge availability of Bangla texts in digital form, an automatic classification system is needed to manage and organize these texts. In this paper, we propose a Bangla text classification system using machine learning techniques. The semantic features of the Bangla input texts are extracted using the Word2Vec algorithm, and a document categorization system is built using a multi-class SVM with SGD. The proposed system is tested on an author-generated dataset and compared with existing techniques, showing better performance. In the future we will consider more classes with larger datasets, which should improve the overall accuracy of the system.
References

[1] A. K. Mandal and R. Sen, "Supervised learning methods for Bangla web document categorization," International Journal of Artificial Intelligence & Applications (IJAIA), vol. 5, no. 5, pp. 93-105, 2014.
[2] K. Xu, Y. Feng, S. Huang and D. Zhao, "Semantic relation classification via convolutional neural networks with simple negative sampling," Empirical Methods in Natural Language Processing, pp. 536-540, Lisbon, Portugal, 2015.
[3] X. Zhang, J. Zhao, and Y. LeCun, "Character-level convolutional networks for text classification," Journal of CoRR, 2016.
[4] D. Tang, B. Qin, and T. Liu, "Document modeling with gated recurrent neural network for sentiment classification," Empirical Methods in Natural Language Processing, pp. 1422-1432, Lisbon, Portugal, 2015.
[5] J. Y. Lee and F. Dernoncourt, "Sequential short-text classification with recurrent and convolutional neural networks," Journal of CoRR, 2016.
[6] J. Pennington, R. Socher and C. D. Manning, "GloVe: Global Vectors for Word Representation," Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543, 2014.
[7] T. Mikolov, K. Chen, G. Corrado and J. Dean, "Efficient estimation of word representations in vector space," Journal of CoRR, 2013.
[8] S. Ismail and M. S. Rahman, "Bangla word clustering based on N-gram language model," International Conference on Electrical Engineering and Information & Communication Technology (ICEEICT), 2014.
[9] Z. Islam, A. Mehler, and R. Rahman, "Text readability classification of textbooks of a low-resource language," 26th Pacific Asia Conference on Language, Information & Computation, pp. 545-553, 2012.
[10] A. Ahmad and M. R. Amin, "Bengali word embeddings and its application in solving document classification problem," 19th International Conference on Computer and Information Technology, pp. 425-430, 2016.
[11] M. Krendzelak and F. Jakab, "Text categorization with machine learning and hierarchical structures," Proc. of 13th Int. Conf. on Emerging eLearning Technologies and Applications, pp. 1-5, 2015.
[12] A. N. Chy, M. H. Seddiqui, and S. Das, "Bangla news classification using naive Bayes classifier," Proc. of 16th Int. Conf. on Computer & Information Technology, pp. 366-371, 2014.
[13] C. Liebeskind, L. Kotlerman and I. Dagan, "Text categorization from category name in an industry motivated scenario," Journal of Language Resources and Evaluation, vol. 49, no. 2, pp. 227-261, 2015.
[14] M. Kaya, G. Fidan and I. H. Toroslu, "Sentiment analysis of Turkish political news," IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology, vol. 01, pp. 174-180, 2012.
[15] S. Alsaleem, "Automated Arabic text categorization using SVM and NB," International Arab Journal of e-Technology, vol. 2, no. 2, June 2011.
[16] The Daily Prothom Alo, online: http://www.prothom-alo.com
[17] The Daily Jugantor, online: https://www.jugantor.com
[18] The Daily Ittefaq, online: http://www.ittefaq.com.bd
[19] The Daily Manobkantha, online: http://www.manobkantha.com
<FEFF4f7f75288fd94e9b8bbe5b9a521b5efa7684002000410064006f006200650020005000440046002065876863900275284e8e55464e1a65876863768467e5770b548c62535370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c676562535f00521b5efa768400200050004400460020658768633002> /CHT <FEFF4f7f752890194e9b8a2d7f6e5efa7acb7684002000410064006f006200650020005000440046002065874ef69069752865bc666e901a554652d965874ef6768467e5770b548c52175370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c4f86958b555f5df25efa7acb76840020005000440046002065874ef63002> /CZE <FEFF005400610074006f0020006e006100730074006100760065006e00ed00200070006f0075017e0069006a007400650020006b0020007600790074007600e101590065006e00ed00200064006f006b0075006d0065006e0074016f002000410064006f006200650020005000440046002000760068006f0064006e00fd00630068002000700072006f002000730070006f006c00650068006c0069007600e90020007a006f006200720061007a006f007600e1006e00ed002000610020007400690073006b0020006f006200630068006f0064006e00ed0063006800200064006f006b0075006d0065006e0074016f002e002000200056007900740076006f01590065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f007400650076015900ed007400200076002000700072006f006700720061006d0065006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076011b006a016100ed00630068002e> /DAN <FEFF004200720075006700200069006e0064007300740069006c006c0069006e006700650072006e0065002000740069006c0020006100740020006f007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000650067006e006500720020007300690067002000740069006c00200064006500740061006c006a006500720065007400200073006b00e60072006d007600690073006e0069006e00670020006f00670020007500640073006b007200690076006e0069006e006700200061006600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020004400650020006f007000720065007400740065006400650020005000440046002d0064006f006b0075006d0065006e0074006500720020006b0061006e002000e50062006e00650073002000690020004100630072006f00620061007400200065006c006c006500720020004100630072006f006200610074002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e> /DEU <FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200075006d002000650069006e00650020007a0075007600650072006c00e40073007300690067006500200041006e007a006500690067006500200075006e00640020004100750073006700610062006500200076006f006e00200047006500730063006800e40066007400730064006f006b0075006d0065006e00740065006e0020007a0075002000650072007a00690065006c0065006e002e00200044006900650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000520065006100640065007200200035002e003000200075006e00640020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP 
<FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f0073002000640065002000410064006f00620065002000500044004600200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e0020006500200069006d0070007200650073006900f3006e00200064006500200063006f006e006600690061006e007a006100200064006500200064006f00630075006d0065006e0074006f007300200063006f006d00650072006300690061006c00650073002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /FRA <FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE <FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003b103be03b903cc03c003b903c303c403b7002003c003c103bf03b203bf03bb03ae002003ba03b103b9002003b503ba03c403cd03c003c903c303b7002003b503c003b903c703b503b903c103b703bc03b103c403b903ba03ce03bd002003b503b303b303c103ac03c603c903bd002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b903c2002e> /HEB 
<FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005E205D105D505E8002005D405E605D205D4002005D505D405D305E405E105D4002005D005DE05D905E005D4002005E905DC002005DE05E105DE05DB05D905DD002005E205E105E705D905D905DD002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E05D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV (Za stvaranje Adobe PDF dokumenata pogodnih za pouzdani prikaz i ispis poslovnih dokumenata koristite ove postavke. Stvoreni PDF dokumenti mogu se otvoriti Acrobat i Adobe Reader 5.0 i kasnijim verzijama.)</s>
<s>/HUN <FEFF00410020006800690076006100740061006c006f007300200064006f006b0075006d0065006e00740075006d006f006b0020006d00650067006200ed007a00680061007400f30020006d0065006700740065006b0069006e007400e9007300e900720065002000e900730020006e0079006f006d00740061007400e1007300e10072006100200073007a00e1006e0074002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c00200068006f007a006800610074006a00610020006c00e9007400720065002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA (Utilizzare queste impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) /JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f0020006e00690065007a00610077006f0064006e00650067006f002000770079015b0077006900650074006c0061006e00690061002000690020006400720075006b006f00770061006e0069006100200064006f006b0075006d0065006e007400f300770020006600690072006d006f0077007900630068002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB <FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM 
<FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e007400720075002000760069007a00750061006c0069007a00610072006500610020015f006900200074006900700103007200690072006500610020006c0061002000630061006c006900740061007400650020007300750070006500720069006f0061007201030020006100200064006f00630075006d0065006e00740065006c006f007200200064006500200061006600610063006500720069002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS <FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043f043e04340445043e0434044f04490438044500200434043b044f0020043d0430043404350436043d043e0433043e0020043f0440043e0441043c043e044204400430002004380020043f04350447043004420438002004340435043b043e0432044b044500200434043e043a0443043c0435043d0442043e0432002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SLV <FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020007000720069006d00650072006e006900680020007a00610020007a0061006e00650073006c006a00690076006f0020006f0067006c00650064006f00760061006e006a006500200069006e0020007400690073006b0061006e006a006500200070006f0073006c006f0076006e0069006800200064006f006b0075006d0065006e0074006f0076002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR <FEFF005400690063006100720069002000620065006c00670065006c006500720069006e0020006700fc00760065006e0069006c0069007200200062006900720020015f0065006b0069006c006400650020006700f6007200fc006e007400fc006c0065006e006d006500730069002000760065002000790061007a0064013100720131006c006d006100730131006e006100200075007900670075006e002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f006200610074002000760065002000410064006f00620065002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /ENU (Use these settings to create Adobe PDF documents suitable for reliable viewing and printing of business documents. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
<s>Algorithm for Bengali Keyword Extraction
International Conference on Bangla Speech and Language Processing (ICBSLP), 21-22 September, 2018
Md. Ruhul Amin, Madhusodan Chakraborty
Search Engine Pipilika, Department of CSE, Shahjalal University of Science & Technology, Sylhet, Bangladesh
{shajib.sust, opuchakraborty}@gmail.com
Abstract—We present an algorithm for keyword extraction from a Bengali document. In natural language processing (NLP), keyword extraction is the automated process of identifying a set of terms that represent the information discussed in a document. A large body of research exists on keyword extraction for resource-rich languages. Some of those works followed a supervised approach using a specific corpus, whereas the latest techniques use unsupervised approaches. Keyword extraction has already reached state-of-the-art performance for resource-rich languages, but only a few works address keyword extraction for Bengali documents, and none of them achieved more than 70% accuracy. In this article, we discuss methods for extracting Bengali keywords from a specific document collection following an unsupervised learning approach. Bengali keyword extraction is generally difficult in terms of word parsing, stemming, stop-word removal, and so on, and the accuracy of those modules also affects the performance of the keyword extraction procedure. Nevertheless, we obtained 87% accuracy in identifying correct Bengali keywords from a document. The procedure we discuss can be applied to any language, but all experimental results reported here are specifically for Bengali.
Index Terms—Term frequency, Inverse document frequency, Co-occurrence matrix, Chi-square distribution, Tseng's keyword extraction algorithm
I. INTRODUCTION
A keyword extraction algorithm automatically recognizes a set of terms that best summarize the topics discussed in a document. Keyword identification is extremely important in the fields of text mining, information retrieval, and NLP. In search, keywords are widely used to categorize results, which helps users find specific data quickly. Keywords are also used for document representation in classification tasks.
Keyword extraction algorithms have been studied extensively for more than five decades. Those studies can be divided into four broad categories:
• corpus-based keyword extraction
• linguistic-feature-based keyword extraction
• statistical approaches to keyword extraction
• language-model-based keyword extraction
Despite wide applicability and research, automatic keyword extraction suffers from poor performance in resource-poor languages. To alleviate this situation, this paper discusses a statistical and algorithmic approach to identifying keywords from an individual document.
This paper is organized as follows. In section 2, we survey the keyword extraction algorithms studied over the last five decades; in section 3, we explain the details of our proposed methods; in section 4, we show the results of keyword extraction from a Bengali document; in sections 5 and 6, we evaluate the proposed algorithms.
II. DIFFERENT KEYWORD EXTRACTION APPROACHES
In this section, we discuss related work on keyword extraction that achieved important milestones:
A. Corpus-based Keyword Extraction
This approach requires a large collection of words/phrases and their counts across thousands of documents.
These algorithms contrast the frequency of a particular word in a document against the distribution of word frequencies across all documents in the corpus to compute the significance of that word [1] [2]. However, it is very hard to build such a corpus for a resource-poor language.
B. Linguistic-Feature-based Keyword Extraction
This approach uses the linguistic features of words, based on their use in sentences and documents</s>
<s>[3] [4] [5]. For example, noun phrases can be considered the most common sources of keywords in a document. However, linguistic annotation tasks such as tokenisation, part-of-speech tagging, lemmatisation, and dependency parsing are not very well studied for resource-poor languages.
C. Statistical Approaches to Keyword Extraction
This approach comprises simple methods which are language- and domain-independent. Statistics of the words in a document, such as n-gram statistics, word frequency, TF-IDF, and word co-occurrences, are used for keyword extraction. Most of these statistical methods also require a wealth of documents [6].
D. Language-Model-based Keyword Extraction
This approach uses context-dependent word representations for keyword extraction. These algorithms rank words/phrases based on the probability distribution of words with respect to the context of the given document. Computing such language models requires a large collection of documents from various sources, and is hence not suitable for resource-poor languages [7].
As most of the above approaches require a large number of documents, we did not follow the footsteps of those researchers; we instead describe methods for extracting keywords from an individual document in the next section. Since we did not come across any Bengali keyword identification algorithm with more than 70% accuracy, we compare our results with keyword identification methods applied to English. Among the methods described above, some are supervised and others are unsupervised. Supervised methods use a previously compiled corpus and a set of predefined keywords; as no corpus of curated Bengali words exists, we follow an unsupervised approach using statistical measures.
III. PROPOSED METHOD
Matsuo and Ishizuka [8] applied a chi-square measure to compute the significance of words and phrases based on their co-occurrences within the sentences of a particular document. The chi-square measure, which determines the bias of word co-occurrences in the document, is used to rank words and phrases as keywords of the document. As the chi-square measure is language-independent and can be computed from the word co-occurrences of the given document alone, we use this procedure for extracting significant terms. Those terms are then passed to Tseng's keyword extraction algorithm, which repeatedly merges nearby words based on three simple merging, dropping, and accepting rules to generate keywords [9].
The proposed method focuses not only on the terms that are important but also on how other terms are biased towards the important terms; it then suggests the most likely keywords of the document through a filtering process. Below we define the important concepts, explain their role in our procedure, and then give the algorithm for keyword extraction.
A. TF-IDF
TF-IDF is defined as the product of term frequency and inverse document frequency for a specific term in a given document [10]. It represents the importance of a word in the given document with respect to the whole corpus.
So, for a specific term t and a particular document d, the term frequency is
TF(t, d) = f / T    (1)
where f = the number of times term t occurs in document d, and T = the total number of different terms in document d.
The inverse document frequency of a term is measured by dividing the total number of documents in the collection by the number of documents in which the term occurs, and taking the logarithm of this result. For a specific term t and a particular document d, the inverse document frequency is
IDF(t, d) = log(N / n)    (2)
where N = the total number of documents, and n = the number of documents in which term t occurs. The two measures are combined as
TF-IDF(t, d) = TF(t, d) × IDF(t, d)    (3)
where d ∈ D, the set of all documents. In our method, we first calculate the TF-IDF of the different terms, sort them in descending order of TF-IDF score, and collect the top 30% of them to generate the set of important terms.</s>
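As a concrete illustration of equations (1)-(3), the following Python sketch computes TF-IDF for a tokenized document collection. This is a minimal sketch, not the authors' implementation; note that, per the paper's definition, the denominator of TF is the number of distinct terms in the document, and the example tokens are placeholders.

import math

def tf(term, doc):
    # Eq. (1): occurrences of the term divided by the number of
    # distinct terms in the document (T, as defined above).
    return doc.count(term) / len(set(doc))

def idf(term, docs):
    # Eq. (2): log of total documents over documents containing the term.
    # Assumes the term occurs in at least one document.
    n = sum(1 for d in docs if term in d)
    return math.log(len(docs) / n)

def tf_idf(term, doc, docs):
    # Eq. (3): product of term frequency and inverse document frequency.
    return tf(term, doc) * idf(term, docs)

# Hypothetical usage: docs is a list of tokenized documents; the top 30%
# of a document's terms by TF-IDF score form the important-term set.
docs = [["রাজনীতি", "নির্বাচন", "দল"], ["নির্বাচন", "ভোট"]]
print(tf_idf("দল", docs[0], docs))  # ≈ (1/3) × log(2) ≈ 0.231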
<s>B. Chi-square Distribution
The chi-square (χ²) distribution [11] is used to find the co-occurrence bias between a term and the important terms. It can be computed from the co-occurrence frequencies of adjacent words; in our approach, we use a co-occurrence matrix for computing the relevancy between two terms.
If the probability distribution of co-occurrences between a term t and the important terms I (chosen by TF-IDF score as discussed above) is biased towards a particular subset of important terms, then term t is likely to be a keyword. The statistical value of χ² for a term t is defined as
χ²(t) = Σ_{i ∈ I} (freq(t, i) − n_t × p_i)² / (n_t × p_i)    (4)
where
I = the set of all important terms according to their TF-IDF,
i = an important term from the set I (i ∈ I),
n_t = the total number of co-occurrences of term t with the important terms,
p_i = the expected probability of i,
freq(t, i) = the frequency of co-occurrence of terms t and i.
If the chi-square value of a term exceeds the critical value for the given degrees of freedom and significance level, the term is passed to Tseng's keyword extraction algorithm, which suggests multi-term keywords.
C. Tseng's Keyword Extraction Algorithm
Tseng's keyword extraction method assumes that a document concentrating on a topic is likely to mention a set of terms a number of times. Maximally repeated terms in the text are thus extracted as keyword candidates. We describe the algorithm below:
Algorithm 1 Tseng's Keyword Extraction Algorithm
1: Calculate the inverse document frequency (IDF) of the root form of every distinct term.
2: for each individual document in the collection do
3: Calculate the TF-IDF value of all distinct terms in the document.
4: Sort the terms in descending order of TF-IDF score.
5: Take the top 30% of terms by TF-IDF score as the important terms.
6: Compute the co-occurrence matrix of the filtered terms.
7: Measure the chi-square statistic of every distinct term.
8: Collect the terms whose chi-square value exceeds the critical value, i.e., whose co-occurrence is significantly biased towards the important terms.
9: Generate multi-term keyword suggestions from single terms by repeatedly merging nearby words based on Tseng's merging, dropping, and accepting rules.
10: Remove the keywords with lower TF-IDF.
11: Sort the suggested keywords by their frequency in the document.
12: Apply the filtering process and take the high-frequency keywords as the final keywords.
13: end for</s>
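Before turning to keyword identification, here is a minimal Python sketch of the chi-square statistic in equation (4). It reflects one reading of the paper: n_t is taken as the term's total co-occurrence count with the important terms, and p_i is supplied by the caller (e.g., important term i's share of all co-occurrences); both points are assumptions where the paper is terse.

def chi_square(term, cooc, important_terms, p):
    # Eq. (4): co-occurrence bias of `term` towards the important terms.
    # cooc[t][i] holds freq(t, i); p[i] is the expected probability of i.
    n_t = sum(cooc[term].get(i, 0) for i in important_terms)
    if n_t == 0:
        return 0.0  # the term never co-occurs with an important term
    return sum(
        (cooc[term].get(i, 0) - n_t * p[i]) ** 2 / (n_t * p[i])
        for i in important_terms
    )

# Hypothetical usage with a two-term important set:
cooc = {"ভোট": {"নির্বাচন": 8, "দল": 2}}
p = {"নির্বাচন": 0.5, "দল": 0.5}
print(chi_square("ভোট", cooc, ["নির্বাচন", "দল"], p))  # (8-5)²/5 + (2-5)²/5 = 3.6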
<s>IV. KEYWORD IDENTIFICATION
First, we generate the IDF value of the root form of every term across the documents in the collection. To find the root form of each term, we use a Bengali stemming procedure which has its own dictionary list. We calculate the TF of each term as well as of its root form, then compute the TF-IDF value and store it against both the original term and its root form. Here we make one assumption: a keyword gets the same TF value whether or not it is in root form. The reason for this assumption is that key phrases are not always in root form; hence, for better phrase detection, we need to analyze both the original and the root form of a word in the chi-square test.
[Fig. 1. Important terms from a document]
In our experiment, we consider 1000 documents of political news from the online newspaper www.prothom-alo.com. From these documents, the IDF value of each unique term has been calculated. We take a single document, in which the terms with the top 30% TF-IDF scores are taken as the important-term set (Figure 1). A snippet of this document, along with the experimental results of keyword extraction, is shown in the appendix at the end of this paper.
We also define the TF-IDF threshold as the lowest score among the top 30% important terms. Later, during keyword identification, we remove all candidate keywords under this threshold:
TF-IDF threshold = the least TF-IDF score of the important terms
For our experiment, the number of classes is the total number of important terms and the number of restrictions is 1, because we assume that at least one of the important terms can bias a term:
Degrees of freedom = size of the important-term list − 1
After the threshold calculation, the chi-square distributions of the different terms are calculated. To measure the chi-square distribution, we first generate a co-occurrence matrix; using the co-occurrence matrix and the important terms, the chi-square value of each term is calculated. In our approach, an asymmetric co-occurrence matrix is used, where the frequency in cell (t1, t2) is the number of times term t2 occurs after term t1 (Figure 2).
[Fig. 2. Co-occurrence matrix]
In our experiment, an N×M co-occurrence matrix is used to hold the co-occurrence frequencies, where N = the number of distinct terms and M = the number of important terms. Using the co-occurrence matrix, we then calculate the chi-square statistic of every term with respect to the important terms; Figure 3 shows the chi-square values of some important terms.
[Fig. 3. Chi-square value of terms]
After the chi-square test, we send the surviving terms with their TF values to Tseng's keyword extraction algorithm, which generates candidate keywords. To choose the best possible keywords from this candidate set, we apply the following filtering steps:
1) Remove terms that appear in a Bengali stop-word list.
2) Remove keywords under the TF-IDF threshold.
3) Consider a single-term keyword in its root form.
4) For a multi-term keyword, consider the last term in its root form.
The result after Tseng's keyword extraction algorithm and the filtering process is shown in Figure 4.
[Fig. 4. Keyword list]</s>
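The asymmetric co-occurrence matrix described in section IV can be built in a few lines. The sketch below counts, for each term, how often each important term immediately follows it; the strictly adjacent window is our reading of Figure 2 and is an assumption, as are the placeholder tokens.

from collections import defaultdict

def cooccurrence_matrix(tokens, important_terms):
    # Cell (t1, t2): number of times important term t2 occurs
    # immediately after term t1 (asymmetric, as in Figure 2).
    cooc = defaultdict(lambda: defaultdict(int))
    important = set(important_terms)
    for t1, t2 in zip(tokens, tokens[1:]):
        if t2 in important:
            cooc[t1][t2] += 1
    return cooc

# Hypothetical usage on a tokenized document:
tokens = ["দল", "নির্বাচন", "ভোট", "নির্বাচন"]
cooc = cooccurrence_matrix(tokens, ["নির্বাচন"])
print(cooc["দল"]["নির্বাচন"], cooc["ভোট"]["নির্বাচন"])  # 1 1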
<s>V. PERFORMANCE ANALYSIS
Here, we take a small collection of text documents, articles from the Prothom Alo newspaper, as our document collection for testing the procedure. It is a collection of 1000 text files. For the example document (see the appendix), we define the keywords manually and compare them with our extracted keywords. The terms used in this evaluation are the following:
• True Positive: a keyword detected as a keyword
• True Negative: not a keyword and not detected as a keyword
• False Positive: not a keyword but detected as a keyword
• False Negative: a keyword but not detected as a keyword
To measure the performance of the described method, we use the standard measures: precision, recall, accuracy, and F-measure. Tables I and II show the performance of the keyword extraction method on the example document, for which the actual keywords were extracted manually.
TABLE I. EXPERIMENTAL RESULTS
Actual keywords | Total keywords found | Actual keywords found | Missed keywords
59 | 44 | 31 | 13
TABLE II. PRECISION, RECALL, ACCURACY AND F-MEASURE
Precision | Recall | Accuracy | F-measure
70.45% | 52.54% | 87.21% | 59.67%
VI. PERFORMANCE EVALUATION
Tseng's algorithm generates some inappropriate keyword suggestions which have a low TF-IDF value but a high chi-square value. These terms enter the keyword list because of their high co-occurrence with terms having a higher TF-IDF value. To solve this problem, we can remove words with a very low TF-IDF score from the given document at the outset. This consideration should increase the performance of the described method; we may also improve the filtering process to get better results.
We have used a Bengali stemmer, whose accuracy is 72%, for stemming the words. Again, we did not use any curated corpus: we used a generic parsing method to extract data from online newspaper sites, and its inaccuracy also decreases the performance of the procedure. Better performance of these modules would therefore help the keyword extraction procedure deliver much better results.
Tseng's algorithm, as used in [12] to identify keywords from English documents, achieved 85.84% accuracy, compared to 80.04% by the Alchemy API [13] on the same contents, whereas our keyword identification for Bengali documents achieved a maximum accuracy of 87.21%. If we consider more documents when computing the TF-IDF of Bengali words, this accuracy should be higher still. We can therefore conclude that Tseng's algorithm on top of a chi-square measurement of word co-occurrence bias can be used successfully for Bengali keyword identification.
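The metrics in Table II follow the standard definitions, and a small helper like the one below reproduces the reported precision (31/44 ≈ 70.45%) and recall (31/59 ≈ 52.54%) from Table I. Here TP = 31 and FP = 44 − 31 = 13 follow directly from Table I, while FN = 59 − 31 = 28 is inferred from the reported recall; the true-negative count needed for accuracy is not given in the paper, so accuracy is omitted.

def precision_recall_f1(tp, fp, fn):
    # Standard definitions underlying Table II.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

tp = 31        # actual keywords found (Table I)
fp = 44 - 31   # suggested keywords that are not actual keywords
fn = 59 - 31   # actual keywords the method did not find
print(precision_recall_f1(tp, fp, fn))  # ≈ (0.7045, 0.5254, 0.6019)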
VII. CONCLUSION
Using the proposed procedure, we can find Bengali keywords from a multi-document collection. As the size of the document collection increases, the accuracy with which TF-IDF identifies important terms in a document also increases. The important terms then help us determine the biased word set more precisely through the chi-square distribution, and the co-occurrence values of these words can be used to identify keywords with higher accuracy. In the future, we will combine this statistical approach with large-scale Bengali n-gram analysis and word embeddings to improve keyword suggestions in the Pipilika search engine [14] [15] [16]. We believe this paper will also help achieve better results in downstream NLP applications such as identifying trending topics in daily newspapers or social media discussions.
ACKNOWLEDGMENT
This work has been done with support from Pipilika Search Engine, a research initiative of Shahjalal University of Science and Technology, Sylhet, Bangladesh.
APPENDIX
We present a detailed example of keyword identification from a document using the proposed method (Figures 5-8).</s>
<s>[Fig. 5. Snippet of the document from which the keywords have been extracted using the proposed procedure]
[Fig. 6. Keywords extracted manually]
[Fig. 7. Keywords extracted using the proposed procedure]
[Fig. 8. Keywords not detected by the proposed procedure]
REFERENCES
[1] G. Salton, "Automatic Text Processing," Addison-Wesley, 1988.
[2] M. Andrade and A. Valencia, "Automatic extraction of keywords from scientific text: application to the knowledge domain of protein families," Bioinformatics, 14(7), 1998, 600-607.
[3] A. Hulth, "Improved automatic keyword extraction given more linguistic knowledge," in Proceedings of EMNLP, 2003, pages 216-223.
[4] R. Mihalcea and P. Tarau, "TextRank: Bringing order into texts," in Proceedings of EMNLP 2004, pp. 404-411, Association for Computational Linguistics, Barcelona, Spain.
[5] S. Beliga, A. Meštrović, and S. Martinčić-Ipšić, "An overview of graph-based keyword extraction methods and approaches," Journal of Information and Organizational Sciences, 39(1), 2015, 1-20.
[6] C. Zhang, "Automatic keyword extraction from documents using conditional random fields," Journal of Computational Information Systems, 4(3), 2008, 1169-1180.
[7] T. Tomokiyo and M. Hurst, "A language model approach to keyphrase extraction," in Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment, Volume 18, Association for Computational Linguistics, 2003.
[8] Y. Matsuo and M. Ishizuka, "Keyword extraction from a single document using word co-occurrence statistical information," International Journal on Artificial Intelligence Tools, 13(1), 2004, 157-170.
[9] Y. Tseng, "Multilingual keyword extraction for term suggestion," ACM, New York, NY, USA, 1998, 377-378.
[10] "Term frequency and inverse document frequency," Online: http://en.wikipedia.org/wiki/Tf*idf, Accessed on August 31, 2018.
[11] "Chi-square distribution," Online: http://en.wikipedia.org/wiki/Chi-squared_distribution, Accessed on August 31, 2018.
[12] M. Mahfuzur Rahman and M. Ruhul Amin, "Language Independent Statistical Approach for Extracting Keywords," in Proceedings of the 4th International Conference on Advances in Electrical Engineering, IEEE, 2017.
[13] "Keyword Extraction demo of Alchemy API," Online: https://github.com/AlchemyAPI, Accessed on August 31, 2018.
[14] A. Ahmad and M. Ruhul Amin, "Bengali word embeddings and its application in solving document classification problem," in Proceedings of the 19th International Conference on Computer and Information Technology, IEEE, 2016.
[15] A. Ahmad, M. Rub Talha, M. Ruhul Amin, and F. Chowdhury, "Pipilika N-gram Viewer: An Efficient Large Scale N-gram Model for Bengali," in Proceedings of the International Conference on Bangla Speech and Language Processing (ICBSLP), IEEE, 2018.
[16] A. Ahmad, M. Rub Talha, M. Ruhul Amin, and F. Chowdhury, "Bengali Document Clustering using Word Mover's Distance," in Proceedings of the International Conference on Bangla Speech and Language Processing (ICBSLP), IEEE, 2018.</s>
<s>2018 21st International Conference of Computer and Information Technology (ICCIT), 21-23 December, 2018
A Comparative Analysis of Word Embedding Representations in Authorship Attribution of Bengali Literature
Hemayet Ahmed Chowdhury, Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh. Email: hemayetchoudhury@gmail.com
Md. Azizul Haque Imon, Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh. Email: azizulhaqueimon@gmail.com
Md. Saiful Islam, Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh. Email: saif.acm@gmail.com
Abstract—In authorship attribution, word embeddings can be fed to the deep layers of neural networks, which extract features from them to learn the stylometric patterns of authors based on the context and co-occurrence of words. In this paper, we investigate the effects of different types of word embeddings on authorship attribution of Bengali literature, specifically the skip-gram and continuous-bag-of-words (CBOW) models generated by Word2Vec and fastText, along with the word vectors generated by GloVe. We experiment with dense neural network models, such as convolutional and recurrent neural networks, analyse how different word embedding models affect the performance of the classifiers, and discuss their properties in this classification task. The experiments are performed on a data set we prepared, consisting of 2400 online blog articles from 6 contemporary authors.
Keywords—word embeddings, Bengali literature, authorship attribution, skip-gram, continuous-bag-of-words, fastText, Word2Vec, GloVe, deep learning, convolutional neural network, recurrent neural network
I. INTRODUCTION
Authorship attribution is the task of identifying the original author of a given piece of text by analyzing previous works of the authors in question. Traditional methods of authorship attribution represent the text with independent features such as lexical n-grams or frequency-based word representations, which are very similar to one-hot encodings. However well such methods perform, their word representations are created independently of each other's meanings, so words with similar contexts end up represented in unrelated regions of the vector space, which is problematic for detecting the semantic values of words. Word embeddings, also known as distributed term representations, take a different approach: each term is encoded as a low-dimensional dense vector that is continuous rather than discrete, in contrast to one-hot representations. This allows word embeddings to encode semantic and syntactic similarity between words by capturing the extent to which words appear in similar contexts [8]. Word2Vec, one of the most popular sets of algorithms for computing word embeddings in modern times, was proposed by Mikolov and Dean [14]; other technologies like GloVe and Facebook's fastText can also be used [11]. Two common variants offered by Word2Vec and fastText are the skip-gram model, which predicts the surrounding words given a target word, and continuous bag-of-words (CBOW), which predicts the target word given the context of the neighboring words.
The GloVe model differs slightly in this respect: it generates word embeddings from co-occurrence statistics, maximizing the probability of word alignments. Because of their ability to hold the context and semantic meanings of words, our approach in this paper is to investigate how word embeddings perform in the task of authorship attribution in Bengali. We discuss the different types of embeddings, their properties, advantages, and disadvantages. We also present an analysis of how the performance of different neural network classifiers reacts to the</s>
<s>different variants of word embeddings. To the best of our knowledge, no work, analysis, or investigation has been published to date on the effect of word embeddings in authorship attribution of Bengali literature.
II. RELATED WORKS
A. On Author Attribution
Studies of author attribution have been going on for quite some time. In initial work, Mosteller and Wallace used the distribution of 30 function words, comprising conjunctions, prepositions, and articles, in the Federalist Papers to attribute them to their original authors [15]. Bogdanova and Lazaridou conducted experiments with cross-language authorship attribution, using books from 6 English authors along with their Spanish translations, and proposed that machine translation could be used as a starting point for cross-language authorship attribution [1]. Nasir et al. [16] approached authorship attribution as semi-supervised anomaly detection via multiple kernel learning, whereas Zhao et al. [25] used Kullback-Leibler divergence with Dirichlet smoothing on the AP, Gutenberg, and Reuters-21578 corpora to obtain impressive results. Sanderson and Guenter [2006] studied character and word sequence kernels for authorship attribution of short texts, comparing performance with two Markov chain approaches [21]. Applying several configurations of sequence kernels to a multi-topic dataset of 50 authors showed character sequence kernels outperforming word sequence kernels; these observations suggested that the amount of training data has more impact on discriminative power than the size of the text data. In 2007, Jonathan H. Clark attempted authorship attribution using synonym-based features [4], whereas Bozkurt et al. took a different but very effective approach with stylometry, using features like vocabulary diversity, bag of words, and the frequency of function words (articles, pronouns, conjunctions) to identify the writing characteristics of five Milliyet columnists [2].
Compared to the progress in authorship attribution research for English and German literature, such research for Bangla has not yet set a high benchmark; only three notable works can be identified. Das and Mitra studied authorship attribution on a data set of three authors consisting of a total of 36 documents [7]. They used unigram and bigram features along with a probabilistic classification method; the unigrams yielded 90% accuracy while the bigrams yielded a staggering 100%. However, their data set was small and the authors had very different styles of writing, which made classification easier.
Chakraborty performed ten-fold cross-validation on three classes and showed that SVM classifiers can provide accuracy of up to 84% [3]. Jana looked into Sister Nivedita's influence on Jagadish Chandra Bose's writings [10], but no classification was performed. Other than these three works, Shanta Phani also attempted attribution on three authors using machine learning techniques, much like Suprabhat Das's work [19]. P. Das, R. Tasmim, and S. Ismail performed experiments on four contemporary Bangladeshi authors using features like word frequency, word and sentence length, type-token ratio, and the number of conjunctions and pronouns [6]. Hossain and Rahman developed a voting system with multiple features classified by cosine similarity, achieving an accuracy of 90.67% [9].
Pal, Siddika and Ismail achieved an accuracy of 90.74% on 6 authors with a support vector machine on a single feature [17]. None of these works convincingly crossed the 90% accuracy mark. Excluding the work with multi-layered perceptrons by Phani, Lahiri and Biswas [18], we did not find much work done with neural networks, and none at all with LSTMs, convolutional neural networks or word embeddings.

B. On Word Embeddings
Tripodi et al. [23] analyzed the performance of the CBOW and Skip-gram algorithms for the Italian language by tuning different hyperparameters. Vine et al. [13] investigated the use of unsupervised features derived from word-embedding approaches and found results indicating that word embeddings improve the effectiveness of concept-extraction methods. In 2017, Haixia Liu attempted citation sentiment analysis using Word2Vec and found that word embeddings are effective for classifying positive and negative citations [12]. A convolutional neural network with word embeddings as its features was used by Santos et al. [22], who concluded that both fastText and Word2Vec outperform baseline models such as Support Vector Machines, Random Forests and Logistic Regression. Rudkowsky et al. [20] found that word embeddings have the potential to improve on current bag-of-words approaches to sentiment analysis in the social sciences. Joulin et al. [11] found that the fastText classifier gives high accuracy, on par with deep learning classifiers, while being faster to train and evaluate.

III. METHODOLOGY

A. Corpus

The dataset that had yielded high accuracies for algorithms such as the voting system with cosine similarity [9] was our primary target for training our models. Further analysis, however, showed that high accuracies were achieved on that dataset only with shallow models, such as the Support Vector Machine (SVM) and Naive Bayes (NB), essentially because of its limited size and lack of sparsity. Following these observations, we developed a custom web parser to accumulate our own dataset, and another parser to normalize the collected stream of raw data. Due to privacy and copyright issues, the authors are code-named FE, HM, EJ, MT, RN and RG in this paper. Table I lists the total size of the corpus, in words, for each author.

TABLE I. CORPUS SIZE PER AUTHOR

Author   Total Words
FE       2382241
HM       3372688
EJ       2401315
MT       2515428
RN       1418598
RG       2454574

Fig. 1. Word distribution per author.

From Table I and Figure 1, we can say that the corpus is moderately balanced. All the documents were collected from online blogs, which makes the corpus realistic and sparse.

B. Word Embeddings

A word-embedding format tries to store words as vector representations in a vector space. In practice, this usually means that word embeddings are placed in a high-dimensional space where the embeddings of similar or related words are close to each other and dissimilar words are placed far apart. Since machine learning, including deep learning, cannot process strings or plain text and requires a vectorized version of the text to operate on, word embeddings were the preferred approach. Word embeddings are generally classified into two classes:

• frequency-based embeddings
• prediction-based embeddings

We opted for prediction-based embeddings. Such methods had limitations until Mikolov et al. [14] introduced word2vec to the NLP community; with these prediction-based methods, analogy tasks like king − man + woman ≈ queen became achievable (a small sketch of such a query appears at the end of this subsection). Further work on these methods resulted in more efficient frameworks such as GloVe and fastText. Generally, prediction-based word embeddings come in two different flavours, CBOW and Skip-gram.
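As an illustration of the analogy property mentioned above, the following sketch (ours, not from the paper) queries pre-trained vectors through gensim; the downloadable 100-dimensional GloVe vectors used here are an assumption for demonstration, not the embeddings trained in this work.

```python
# Sketch (ours): the king - man + woman analogy on pre-trained vectors.
import gensim.downloader as api

# 100-dimensional GloVe vectors (gensim's downloadable demo model).
vectors = api.load("glove-wiki-gigaword-100")

# Vector arithmetic: king - man + woman should land near "queen".
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```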
The GloVe framework mentioned above, however, uses a third type of technique.

1) Continuous Bag of Words: The CBOW model tries to predict the probability of a word given its context. The context is represented like a bag of the contained words, within a fixed-size window around the target word. For example, let a corpus C contain the text "Hey, this is a sample corpus using only one context word". With the context window set to 1, the corpus can be configured into a training set of (context, target) pairs, where the target for each data point is the word at the centre of the window (Fig. 2 shows the training-set configuration and Fig. 3 the corresponding targets; a small sketch of this pair construction follows).
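The pair construction can be sketched as follows (our illustration of what Figs. 2 and 3 depict, assuming a symmetric window of one word):

```python
# Sketch (ours): building CBOW-style (context, target) pairs with window = 1.
corpus = "Hey this is a sample corpus using only one context word".split()
window = 1

pairs = []
for i, target in enumerate(corpus):
    # Words up to `window` positions to the left and right form the context.
    context = corpus[max(0, i - window):i] + corpus[i + 1:i + 1 + window]
    pairs.append((context, target))

print(pairs[:3])
# [(['this'], 'Hey'), (['Hey', 'is'], 'this'), (['this', 'a'], 'is')]
```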
Fig. 4 shows the single-word architecture of a CBOW model, which takes a single context word as input. The base concept of the multi-word architecture remains the same, but the network is a bit more complex (Fig. 5). In essence, CBOW predicts a word when its context is given.

2) Skip-Gram: The Skip-Gram architecture is an inversion of CBOW's: rather than predicting a word from its context, the skip-gram model predicts the context when a word is given. Here, more distant words are given less weight by randomly sampling them; when defining the window-size parameter, only the maximum window size is configured, and the actual window size varies from 1 to that maximum, chosen at random. If a context window of size 1 is chosen for both the CBOW and Skip-gram models, two one-hot-encoded target variables act as the targets, along with two corresponding outputs. The two error vectors, obtained by calculating the errors against the two target variables, are added element-wise to give a final error vector. After training, the weights between the input and the hidden layer are taken as the word-vector representation. The loss function, or objective, is of much the same type as in the CBOW model. Fig. 6 shows the Skip-gram architecture.

3) GloVe: GloVe has quite a similar working mechanism to the Word2Vec models above. But where word2vec predicts context given words, GloVe learns by constructing a co-occurrence matrix that counts how frequently a word appears in a context. Since this matrix becomes gigantic, it is factorized into a lower-dimensional representation. When modeling the loss function, GloVe uses the probability ratios of the words that appear in the context.

For our analysis, we used the following embeddings for training on our dataset, each consisting of 100 dimensions:

• the CBOW model by Word2Vec
• the Skip-Gram model by Word2Vec
• the CBOW model by fastText
• the Skip-Gram model by fastText
• word embeddings by GloVe

In generating the word-level embeddings with these five models, we used the same parameters throughout, for fair experimentation (a sketch of this setup follows the list):

• vector dimension of length 100;
• context window of 5;
• 5 learning iterations.
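A minimal sketch (ours, not the authors' code) of the four gensim-trainable configurations with the parameters above; "corpus.txt" is a placeholder for the tokenized blog corpus:

```python
# Sketch (ours) of the embedding setups described above, via gensim.
from gensim.models import FastText, Word2Vec

# Placeholder corpus: one whitespace-tokenized sentence per line.
sentences = [line.split() for line in open("corpus.txt", encoding="utf-8")]
common = dict(vector_size=100, window=5, epochs=5)  # parameters from the text

w2v_cbow = Word2Vec(sentences, sg=0, **common)  # Word2Vec, CBOW
w2v_sg   = Word2Vec(sentences, sg=1, **common)  # Word2Vec, Skip-gram
ft_cbow  = FastText(sentences, sg=0, **common)  # fastText, CBOW
ft_sg    = FastText(sentences, sg=1, **common)  # fastText, Skip-gram

# gensim has no GloVe trainer; the fifth model is typically trained with the
# Stanford "glove" tool, whose output vectors gensim can then load.
```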
C. Architectures of Proposed Models

Deep learning is a specialized branch of machine learning that can learn features directly from a given dataset, without any prior knowledge of the features, and deep learning approaches are a standard way of learning patterns from word embeddings. On the held-out dataset, we implemented customized architectures of the following deep learning models, with word embeddings as their input, to analyze how the different word embeddings perform with different types of neural networks:

• a Convolutional Neural Network (CNN);
• a classical Multi-layered Perceptron (NN);
• a Recurrent Neural Network with Long Short-Term Memory (RNN with LSTM).

1) The Convolutional Neural Network Model: The CNN architecture we use is a modification of the one used by Collobert et al. [5]. Our model takes as input a text padded to a length of 2000 words. The CNN is based on a single channel, in which we represent the text as a concatenation of its word embeddings as generated by the algorithms mentioned previously. The convolutional layer slides a filter of window size 64 over the input channel; the filter generates a new feature for each window of words, and applying it over every possible window of words in the text produces a feature map. Max-over-time pooling [5] then condenses each feature map to its most important feature by taking its maximum value, and naturally deals with variable input lengths. A final softmax layer takes the concatenation of the maximum values of the feature maps produced by all filters and outputs a probability distribution over the candidate authors. Collobert et al. [5] used a CNN with a non-static word-embedding channel in which the vectors are modified during training using backpropagation; our approach differs in that the word-embedding channel is kept static, so the vectors pre-trained by Word2Vec, fastText and GloVe are not modified during training (a minimal sketch of this network follows the three model descriptions).

2) The Multi-layered Perceptron Model: The multi-layered perceptron is similar to our CNN architecture. It takes the concatenation of the word embeddings as input to its first layer; two more dense layers with ReLU activation extract the most important features and patterns from the embeddings, and a final softmax layer outputs the probability distribution over the authors present in the dataset. The pre-trained word embeddings are not modified during training.

3) The RNN Model: The recurrent architecture we propose uses LSTM gates. The word-embedding representations of the padded texts are the input to a series of non-linear operations in the LSTM layer [24]. As in the MLP architecture, two more hidden layers extract features from the representations, and again a final softmax layer outputs the probability distribution over the authors, with the pre-trained word vectors kept unmodified.
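The following is a minimal Keras sketch (ours, not the authors' code) of the static-channel CNN described above; the vocabulary size, number of filters and embedding matrix are placeholders we chose, since the paper does not state them:

```python
# Sketch (ours) of a static-channel CNN over word embeddings.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, embed_dim, max_len, n_authors = 50000, 100, 2000, 6
embedding_matrix = np.zeros((vocab_size, embed_dim))  # placeholder pre-trained vectors

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,)),  # texts padded to 2000 word ids
    layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False),              # static channel: pre-trained vectors frozen
    layers.Conv1D(filters=128,         # number of filters is our assumption
                  kernel_size=64,      # filter slid over 64-word windows
                  activation="relu"),
    layers.GlobalMaxPooling1D(),       # max-over-time pooling
    layers.Dense(n_authors, activation="softmax"),  # distribution over authors
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```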
IV. EXPERIMENTS AND RESULT ANALYSIS

To analyze the performance of the word embeddings, the models were trained on 2100 articles and tested on the remaining 300. We settled on a test-set size of 12.5% instead of the usual 80:20 split because of the moderate size of the dataset: increasing the test set would significantly reduce the training our models receive. Figure 7 and Table II show the performance of the different word embeddings with each classifier, namely the NN, the RNN with LSTM and the CNN.

Fig. 7. Performance analysis of different feature sets with deep neural networks.

TABLE II. PERFORMANCE ANALYSIS OF DIFFERENT FEATURE SETS WITH DEEP NEURAL NETWORKS

Classifier   Representation         Accuracy
ANN          fastText (skip-gram)   85.46%
ANN          fastText (CBOW)        76.58%
ANN          W2V (skip-gram)        68.56%
ANN          W2V (CBOW)             56.85%
ANN          GloVe                  59.19%
RNN          fastText (skip-gram)   89.6%
RNN          fastText (CBOW)        83.61%
RNN          W2V (skip-gram)        75.25%
RNN          W2V (CBOW)             74.58%
RNN          GloVe                  69.56%
CNN          fastText (skip-gram)   92.9%
CNN          fastText (CBOW)        75.58%
CNN          W2V (skip-gram)        88.29%
CNN          W2V (CBOW)             80.60%
CNN          GloVe                  86.95%

From Figure 7 and Table II, we can see that the convolutional neural networks work best when combined with word embeddings on the held-out dataset.
The most important observation of this study, however, is that the skip-gram models tend to outperform the CBOW models and the GloVe embeddings. Since the task is authorship attribution, the skip-gram models' nature of predicting the context from a word appears to suit it best. The skip-gram word embeddings by fastText outperform the word representations produced by the other algorithms on almost all models.

V. CONCLUSION

To date, no research has been published investigating the effects of word embeddings with deep neural networks for authorship attribution in Bengali. Word embeddings can hold the semantic values of Bengali terms, which is a key attribute for authorship attribution in this language. Our contribution in this paper was to explore the use of word embeddings for authorship attribution and to provide a comparison between the different word-representation algorithms, showing which works best for the Bengali language. We discussed the different types of word embeddings in depth, along with their performance when combined with different neural network models, and concluded that the skip-gram word embeddings by fastText tend to perform better than the embeddings by Word2Vec or GloVe on this specific task. For future work, we would like to investigate character-level embeddings produced by different algorithms and how they compare to word-level embeddings for Authorship Attribution in the Bengali language.

REFERENCES

[1] Dasha Bogdanova and Angeliki Lazaridou. "Cross-language authorship attribution". In: Proceedings of the 9th International Conference on Language Resources and Evaluation (2014).
[2] I. N. Bozkurt, O. Baglioglu, and E. Uyar. "Authorship attribution". In: 2007 22nd International Symposium on Computer and Information Sciences (Nov. 2007), pp. 1–.
[3] Tanmoy Chakraborty. "Authorship Identification Using Stylometry Analysis in Bengali Literature". In: CoRR (2012).
[4] Jonathan H. Clark and Charles J. Hannon. "A Classifier System for Author Recognition Using Synonym-Based Features". In: MICAI 2007: Advances in Artificial Intelligence. 2007, pp. 839–849.
[5] Ronan Collobert et al. "Natural Language Processing (Almost) from Scratch". In: J. Mach. Learn. Res. 12 (Nov. 2011), pp. 2493–2537. ISSN: 1532-4435.
[6] P. Das, R. Tasmim, and S. Ismail. "An experimental study of stylometry in Bangla literature". In: Electrical Information and Communication Technology (EICT), 2015 2nd International Conference (2015).
[7] Suprabhat Das and Pabitra Mitra. "Author identification in Bengali literary works". In: Pattern Recognition and Machine Intelligence (2011).
[8] J. Firth. "A Synopsis of Linguistic Theory 1930–1955". In: Studies in Linguistic Analysis. Philological Society, Oxford, 1957; reprinted in F. Palmer (ed., 1968), Selected Papers of J. R. Firth, Longman, Harlow.
[9] M. Tahmid Hossain et al. "A stylometric analysis on Bengali literature for authorship attribution". In: Computer and Information Technology (ICCIT), 2017 20th International Conference of IEEE (2017).
[10] Siladitya Jana. "Sister Nivedita's influence on J. C. Bose's writings". In: (2015).
[11] Armand Joulin et al. "Bag of tricks for efficient text classification". In: arXiv preprint arXiv:1607.01759 (2016).
[12] Haixia Liu. "Sentiment analysis of citations using word2vec". In: arXiv preprint arXiv:1704.00177 (2017).
[13] Mahnoosh Kholghi et al. "Analysis of Word Embeddings and Sequence Features for Clinical Information Extraction". In: Proceedings of the Australasian Language Technology Association Workshop 2015. Parramatta, Australia, 2015, pp. 21–30.
[14] Tomas Mikolov et al. "Efficient Estimation of Word Representations in Vector Space". In: CoRR abs/1301.3781 (2013).
[15] Frederick Mosteller and David L. Wallace. "Inference in an Authorship Problem". In: Journal of the American Statistical Association 58.302 (1963), pp. 275–309.
[16] A. Jamal Nasir, Nico Gornitz, and Ulf Brefeld. "An off-the-shelf approach to authorship attribution". In: (2014).
[17] U. Pal, A. S. Nipu, and S. Ismail. "A machine learning approach for stylometric analysis of Bangla literature". In: 2017 20th International Conference of Computer and Information Technology (ICCIT). Dec. 2017, pp. 1–.
[18] S. Phani, S. Lahiri, and A. Biswas. "A machine learning approach for authorship attribution for Bengali blogs". In: (Nov. 2016), pp. 271–274.
[19] S. Phani et al. "Authorship attribution in Bengali language". In: ().
[20] Elena Rudkowsky et al. "More than Bags of Words: Sentiment Analysis with Word Embeddings". In: Communication Methods and Measures 12.2-3 (2018), pp. 140–157.
[21] Conrad Sanderson and Simon Guenter. "Short Text Authorship Attribution via Sequence Kernels, Markov Chains and Author Unmasking: An Investigation". In: Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing. EMNLP '06. Sydney, Australia, 2006, pp. 482–491.
[22] Igor Santos, Nadia Nedjah, and Luiza de Macedo Mourelle. "Sentiment analysis using convolutional neural network with fastText embeddings". In: Computational Intelligence (LA-CCI), 2017 IEEE Latin American Conference on. IEEE. 2017, pp. 1–5.
[23] Rocco Tripodi and Stefano Li Pira. "Analysis of Italian Word Embeddings". In: arXiv preprint arXiv:1707.08783 (2017).
[24] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. "Recurrent Neural Network Regularization". In: CoRR abs/1409.2329 (2014). arXiv: 1409.2329. URL: http://arxiv.org/abs/1409.2329.
[25] Ying Zhao, Justin Zobel, and Phil Vines. "Using Relative Entropy for Authorship Attribution". In: Information Retrieval Technology. 2006, pp. 92–105.
2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), 7–9 February 2019.

An Approach for Detection and Correction of Missing Word in Bengali Sentence

M. F. Mridha, Md. Eyaseen Arafat Khan, Md. Mashod Rana, Md. Masud Ahmed, Md. Abdul Hamid, and Mohammad Tipu Sultan
Department of Computer Science and Engineering, University of Asia Pacific, Dhaka, Bangladesh
firoz@uap-bd.edu, eyaseenarafatkhan08@gmail.com, mashod0rana@gmail.com, mdmasudrana81uap@gmail.com, ahamid@uap-bd.edu, tipu07u5@gmail.com

Abstract—Auto-correction of a missing word in a sentence is not easy, and it proves even more challenging for the Bengali language. Our study reveals that no significant research work has been done on this very topic for the Bengali language.
In this paper, we propose a method that can detect a missing word and provide a suggestion list corresponding to the missed word, with 82.82% accuracy. We use an n-gram model to determine whether a word is missing between two words of a sentence, and then use probability scoring to rank the suggestion list after finding the probable candidates for the missed word. One corpus, a collection of bigrams, is used for making the decision; another corpus, a collection of trigrams, is used for finding preferable words for the missed word. Finally, we use another six corpora to evaluate the proposed method. All corpora were created by us from data collected from the web.

Keywords—NLP, N-gram, Missing Word Error, Bengali Language.

I. INTRODUCTION

We communicate with each other through languages. Bengali is the primary language in Bangladesh and the second most spoken language in India; it is one of the most widely spoken languages, with around 250 million speakers throughout the world. To communicate and to keep records, official and unofficial alike, we use textual representation. In a computerized system, processing the Bengali language is not easy because of its complex orthographical rules and critical grammatical
rules, which are quite hard to follow. That is why auto-correction of our text, known as spelling correction, has become a common expectation. The duty of a spell checker is to detect errors and also to provide suggestions. The absence of a word from our known dictionary is a spelling error; Kukich [1] describes two types of error, the non-word error and the real-word error. In this paper, however, we do not work with those types of error. We focus on detecting a word that is unintentionally missed while typing a sentence; in this paper we call this a missing word error. A missing word error is a sentence-level error. It occurs frequently when we are typing a large article, and it also occurs during OCR conversion. The example below helps to understand the error.

Example:

"ei েজলায় তার চা বাগান েথেক সবেচেয় ভাল চা ৈতির হয় eবং েসখােন কখনo িমক িবেkাভ হয়িন। ei না হoয়ার েপছেন েটা কারণ আেছ। মািলক বেলন, মািলক িহেসেব িতিন খুবi েপশাদার। িতিন ভাল েবতন o স্ুেযাগ সুিবধা িদেয় থােকন। িনnুেকর বkবয্ হল, িমকেদর মেধয্ যারা েনতা হেত চায় তােদর িতিন িকেন েফেলন বা সিরেয় েদন।"

The above is a correct paragraph in Bengali. Now suppose someone types the paragraph as shown below:

"ei েজলায় তার চা বাগান েথেক সবেচেয় চা ৈতির হয় eবং েসখােন কখনo িমক হয়িন। ei না হoয়ার েপছেন েটা কারণ আেছ। মািলক বেলন, মািলক িহেসেব িতিন খুবi েপশাদার। িতিন ভাল েবতন o স্ুেযাগ সুিবধা থােকন। িনnুেকর বkবয্ হল, িমকেদর মেধয্ যারা েনতা হেত চায় তােদর িতিন েফেলন বা সিরেয় েদন।"

If we match these two paragraphs, we can easily see that some words are missing from the second one; the missing words are marked in red colour (in the original) in the following copy of the paragraph:

"ei েজলায় তার চা বাগান েথেক সবেচেয় ভাল চা ৈতির হয় eবং েসখােন কখনo িমক িবেkাভ হয়িন। ei না হoয়ার েপছেন েটা কারণ আেছ। মািলক বেলন, মািলক িহেসেব িতিন খুবi েপশাদার। িতিন ভাল েবতন o স্ুেযাগ সুিবধা িদেয় থােকন। িনnুেকর বkবয্ হল, িমকেদর মেধয্ যারা েনতা হেত চায় তােদর িতিন িকেন েফেলন বা সিরেয় েদন।"

During typing we often drop some words unintentionally, more so in a large article, and the same happens when data is processed by automated software. As stated earlier, we call this a missed word error, and it is not an unfamiliar issue: it is a common occurrence for anyone who works in a text editor or similar services, and it also happens while chatting, writing e-mail, etc. So this is not an issue that may be ignored, and it drives us to propose a method to solve the missed-word-error problem. To the best of our knowledge, no research work has been done on this topic for the Bengali language.

The rest of the paper is organized as follows. Section II discusses similar work. Section III describes the proposed framework for the missed-word
error. Section IV describes our methodologies. Sections V and VI present our analysis and results, respectively. Finally, Section VII concludes our work.

II. SIMILAR WORK

During our study we tried to find methods that had already been used to solve this type of problem, but we did not find any satisfactory result; the resources for exactly this problem are not rich. We did find a work by Ahmed Ben Salah et al. [2], who tried to detect missing components, whether text or graphics, in OCR output. They considered the empty areas and the background, since the error could occur only there, learnt each element detected by the OCR, and found the nearest element for the targeted area. Our work is somewhat similar, but we detect a missing word in a sentence, which occurs not only after OCR conversion but also when a person makes this type of error unintentionally while typing in a text editor. As far as our study reveals, this type of work has not yet been done for Bengali, although numerous works exist for spell checking. Bidyut Baran Chaudhuri [3] developed a model that deals with non-word errors in Bengali sentences. UzZaman and Khan [4] proposed a Double Metaphone encoding system for Bengali; this encoding can be used to improve spell checking in Bengali, but only for misspelled words. Nur Hossain Khan [5] used character n-grams to check the correctness of Bengali words, with no provision for identifying missing words. Prianka Mandal [6] developed a clustering-based Bengali spell checker, again for non-word errors, using the Partitioning Around Medoids algorithm. We detect missing words using an n-gram model with the help of probability; n-grams are used in [7] and [8] to detect real-word errors in the English language and for other NLP purposes. This is therefore the very first work on the Bengali language to detect a missing word in a sentence using n-grams.

III. PROPOSED FRAMEWORK

To build the method we need: (i) first, to decide whether a word is missing or not, and (ii) if a word is missed, to provide a suggestion list for the missed word. From a sentence we first generate bigrams; if a bigram exists in our corpus it is fine, otherwise it is declared an error. To provide the suggestion list, we search our trigram corpus for trigrams whose first and last words match the bigram's first and last words, respectively; the middle word of each such trigram is a suggested candidate for the missing word.

IV. METHODOLOGIES

Our methodology consists of the following steps: (i) data pre-processing, (ii) n-gram generation, (iii) detection of the missed word, and (iv) finding the suggestion list (if an error exists). We describe each in the following.
A. Data Preprocessing

We collected data from various online newspapers, blogs, etc., and stored it in six different files, each known as a corpus. From these corpora, after removing unnecessary symbols, we generate bigrams and trigrams and build two further corpora: one is the collection of bigrams with their frequencies (counts) and the other is the collection of trigrams with their frequencies.

B. Generating N-grams

An n-gram is n contiguous elements from a sequence of elements, where the elements may be characters, words, speech, text, etc. In natural language, n-grams were first introduced by Shannon [9]. The importance of the n-gram is that it is language independent [10]. For n = 1 it is known as a unigram, for n = 2 a bigram, and for n = 3 a trigram. If a sequence is a collection of n elements,

Sequence = {e_1, e_2, ..., e_{n-1}, e_n},

then the bigram and trigram at the i-th element are

Bigram: e_i e_{i+1},  Trigram: e_i e_{i+1} e_{i+2}.

For our model we take a sentence consisting of n words, and we assume that the sentence contains no non-word or real-word errors:

Sentence = w_1 w_2 ... w_{n-1} w_n.

From this sentence we generate the bigrams; for the i-th word,

Bigram = w_i w_{i+1}.

C. Detection of the Missing Word

For every bigram generated from the sentence, we search our bigram corpus. If we find the bigram in the corpus, there is no error; if it fails to match, we say there is an error in the sentence, meaning we expect that a word between these two words was missed unintentionally. Since a word is missing between the two words, we need suggestions telling us which word could best fit in place of the missed word.

D. Finding the Suggestion List

When it is decided that a word is missing inside a bigram, we need the suggestion list. To extract it, we take the bigram and, from our trigram corpus, extract the word list with frequencies. We look for trigrams whose first word is the same as the bigram's first word and whose last word is the same as the bigram's last word. If the trigram is T1 T2 T3 (three words), then T1 = the bigram's first word, T2 = the expected missing word, and T3 = the bigram's last word. For each trigram extracted from the trigram corpus, we calculate a probability from its frequency; the extracted list is sorted so that the higher-probability trigrams are at the top, and only the second word of each trigram is shown in the list. Writing tg for a trigram, the list of extracted trigrams is {tg_1, tg_2, ..., tg_z}, and for each trigram

\[ p(tg_i) = \frac{frq(tg_i)}{\sum_{j=1}^{z} frq(tg_j)} \tag{1} \]

where frq(tg_i) is the frequency of the i-th trigram in the list, and the denominator is the sum of the frequencies of every trigram in the list.
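As a minimal sketch (ours, not the authors' code) of the detection and scoring procedure just described, assuming placeholder bigram and trigram frequency tables:

```python
# Sketch (ours) of missing-word detection and suggestion scoring, Eq. (1).
from collections import Counter

bigram_corpus = Counter()   # {(w1, w2): count}, built from the corpora
trigram_corpus = Counter()  # {(w1, w2, w3): count}

def suggest(words):
    """Check each adjacent pair; where the bigram is unseen, rank middle words."""
    reports = []
    for w1, w2 in zip(words, words[1:]):
        if (w1, w2) in bigram_corpus:
            continue  # known bigram: no missing-word error here
        # Candidate middle words: trigrams whose first/last words match the bigram.
        candidates = Counter({t[1]: f for t, f in trigram_corpus.items()
                              if t[0] == w1 and t[2] == w2})
        total = sum(candidates.values())
        ranked = [(w, f / total) for w, f in candidates.most_common()] if total else []
        reports.append(((w1, w2), ranked))  # suggestion list with Eq. (1) scores
    return reports
```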
In our actual code we do not need to save the trigrams in the trigram list (also known as the suggestion list); since we only need the missing word, we store the middle word of each trigram together with the trigram's frequency. The following steps summarize the procedure:

Bigram = B1 B2, where B1 = first word and B2 = second word.
Trigram = T1 T2 T3, where T1 = 1st word, T2 = 2nd word, T3 = 3rd word.

Generate a set of bigrams SB from a sentence.
For i = 1 to n, where n is the number of bigrams in SB:
    If the i-th B1 B2 is present in the bigram corpus:
        there is no missing-word error.
    Else:
        a word is missing inside the i-th B1 B2.
        Let S be the suggestion list.
        Search for T1 T2 T3 in the trigram corpus;
        if T1 = B1 and T3 = B2, insert T2 into S with its frequency.
        For each word in S, calculate its score using (1).
Sort the suggestion list S in decreasing order of score and show it.

V. ANALYSIS

For evaluation purposes we created six corpora, with data collected from the web. To evaluate the method we injected errors into the corpora; that is, we wrote a program that deletes a word from sentences in all of our corpora. After creating the error-containing corpora, we take each corpus as input and identify its sentences. From each sentence we generate bigrams and use the bigram corpus to decide whether it is correct or not; the trigram corpus is used to provide a suggestion list for each error. Here we explain the proposed model with some example sentences.

Right: িতিন িবdুেট নামটা িকছুেতi মেন করেত পােরন না।
Wrong: িতিন িবdুেট নামটা িকছুেতi করেত পােরন না।

The word "মেন" is missing from the wrong sentence above. In our proposed method, the bigram "িকছুেতi করেত" is not found in the bigram corpus and is declared an error. The method then searches the trigram corpus, extracts every matching trigram with a possible word that can be inserted inside the bigram, and calculates the probabilities to generate the suggestion list. Table I shows the suggestion list with its scores.

TABLE I. SUGGESTION LIST FOR THE BIGRAM "িকছুেতi করেত"

Suggested word   Score
মেন              0.2857
িব াস            0.1428
বশ               0.1428
pকাশ             0.1428
জুত              0.1428
কlনাo            0.1428

The list above provides the correct answer.

Right: গত কেয়ক সpাহ ধের েদেশর pধানমিnt েটিলিভশন, েরিডo eবং সংবাদপেt বার বার িববৃিত িদেয়েছন, িকছু ঘটেবনা, সবাi শাn থাkন।
Wrong: গত কেয়ক ধের েদেশর pধানমিnt েটিলিভশন, েরিডo eবং সংবাদপেt বার বার িববৃিত িদেয়েছন, িকছু ঘটেবনা, সবাi থাkন।

TABLE II. SUGGESTION LIST FOR THE BIGRAM "কেয়ক ধের"

Suggested word   Score
বছর              0.3076
িদন              0.2307
সpাহ             0.1538
মাস              0.1538
শতক              0.0769
দশক              0.0769

The word "সpাহ" is the correct one in this list.

TABLE III. SUGGESTION LIST FOR THE BIGRAM "সবাi থাkন"

Suggested word   Score
ভাল              0.333
সুেখ             0.333
শাn              0.333

The word "শাn" is the correct word in this list, but its rank falls lower in the list because the frequency is the same for every matching trigram, so no word can outrank the others.
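As a sanity check on Eq. (1), the scores in Table I are consistent with a matching trigram set of total frequency 7, in which the top trigram occurs twice and the other five once each (our inference; the paper does not list the raw counts):

\[
p(w_{\text{top}}) = \frac{2}{2 + 5 \cdot 1} = \frac{2}{7} \approx 0.2857,
\qquad
p(w_{\text{other}}) = \frac{1}{7} \approx 0.1428 .
\]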
VI. RESULT

Our bigram corpus contains 400,000 bigrams and our trigram corpus contains 443,000 trigrams. We used the calculated probabilities to rank the words, and the ranking is used to provide suggestions. After applying the method to all of our corpora, we obtained an average accuracy of 82.82%. On average, every corpus contains 15,000 sentences. The details of the experiments on the different corpora are listed in Table IV.

TABLE IV. PERFORMANCE EVALUATION

Dataset     Errors injected   Errors detected with suggestion   Accuracy
Corpus 01   17730             14387                             81.14%
Corpus 02   15017             12419                             82.70%
Corpus 03   13523             11448                             84.66%
Corpus 04   16578             13653                             82.36%
Corpus 05   7853              6771                              86.22%
Corpus 06   27059             22282                             82.35%
Total       97760             80960                             82.82%

The lowest accuracy we obtained is around 81% and the maximum is around 86%. There are several reasons why we could not reach higher accuracy. A word may be missing from the start or the end of a sentence; we did not consider such cases in this paper, since during typing or chat we rarely miss the starting word. In some cases a sentence remains grammatical after a word goes missing; our method then fails to detect the missed-word error, and we did not count such cases in our accuracy calculation. In other cases the method detected the error but failed to provide a suggestion because of a lack of data in the corpus. We are therefore rigorously working on developing further corpora and incorporating deep learning to mitigate these limitations and improve the accuracy of missed-word-error correction in the Bengali language.

VII. CONCLUSION

In this paper, we have proposed an approach for missed-word-error detection and correction in the Bengali language. We introduced the concept of the missed-word error, which is essentially different from the non-word and real-word errors, in the context of the Bengali language, and focused on detecting a word that is unintentionally missed while typing a sentence. We developed a solution to this problem and achieved more than 82% accuracy. We strongly believe that this research area opens up new possibilities for the greater improvement of the Bengali language. We are working to incorporate deep learning for accuracy enhancement, and in future work we plan to explore time complexity, statistical analysis of detection accuracy, and different corpora for training and testing.

REFERENCES

[1] K. Kukich, "Techniques for automatically correcting words in text," ACM Computing Surveys, vol. 24, no. 4, pp. 377–439, 1992.
[2] A. B. Salah, N. Ragot, and T. Paquet, "Adaptive detection of missed text areas in OCR outputs: application to the automatic assessment of OCR quality in mass digitization projects," SPIE Document Recognition and Retrieval XX, San Francisco, United States, vol. 8658, pp. 110–122, Feb. 2013.
[3] B. B. Chaudhuri, "Reversed word dictionary and phonetically similar word grouping based spell-checker to Bangla text," LESAL Workshop, Mumbai, 2001.
[4] N. UzZaman and M. Khan, "A double metaphone encoding for Bangla and its application in spelling checker," International Conference on Natural Language Processing and Knowledge Engineering, IEEE, pp. 705–710, 2005.
[5] N. H. Khan, G. C. Saha, B. Sarker, and M. H. Rahman, "Checking the correctness of Bangla words using n-gram," International Journal of Computer Application, vol. 89, no. 11, 2014.
[6] P. Mandal and B. M. Mainul Hossain, "Clustering-based Bangla spell checker," 2017 IEEE International Conference on Imaging, Vision & Pattern Recognition (icIVPR), pp. 1–6, April 2017.
[7] S. Sharma and S. Gupta, "A Correction Model for Real-word Errors," Procedia Computer Science, vol. 70, pp. 99–106, 2015.
[8] P. Samanta and B. B. Chaudhuri, "A simple real-word error detection and correction using local word bigram and trigram," Proceedings of the 25th Conference on Computational Linguistics and Speech Processing (ROCLING 2013), pp. 211–220, October 2013.
[9] C. E. Shannon, "Prediction and entropy of printed English," Bell System Technical Journal, vol. 30, no. 1, pp. 50–64, 1951.
[10] F. Ahmed, E. W. D. De Luca, and A. Nürnberger, "Revised n-gram based automatic spelling correction tool to improve retrieval effectiveness," Polibits, no. 40, pp. 39–48.
<FEFF4f7f75288fd94e9b8bbe5b9a521b5efa7684002000410064006f006200650020005000440046002065876863900275284e8e55464e1a65876863768467e5770b548c62535370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c676562535f00521b5efa768400200050004400460020658768633002> /CHT <FEFF4f7f752890194e9b8a2d7f6e5efa7acb7684002000410064006f006200650020005000440046002065874ef69069752865bc666e901a554652d965874ef6768467e5770b548c52175370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c4f86958b555f5df25efa7acb76840020005000440046002065874ef63002> /CZE <FEFF005400610074006f0020006e006100730074006100760065006e00ed00200070006f0075017e0069006a007400650020006b0020007600790074007600e101590065006e00ed00200064006f006b0075006d0065006e0074016f002000410064006f006200650020005000440046002000760068006f0064006e00fd00630068002000700072006f002000730070006f006c00650068006c0069007600e90020007a006f006200720061007a006f007600e1006e00ed002000610020007400690073006b0020006f006200630068006f0064006e00ed0063006800200064006f006b0075006d0065006e0074016f002e002000200056007900740076006f01590065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f007400650076015900ed007400200076002000700072006f006700720061006d0065006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076011b006a016100ed00630068002e> /DAN <FEFF004200720075006700200069006e0064007300740069006c006c0069006e006700650072006e0065002000740069006c0020006100740020006f007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000650067006e006500720020007300690067002000740069006c00200064006500740061006c006a006500720065007400200073006b00e60072006d007600690073006e0069006e00670020006f00670020007500640073006b007200690076006e0069006e006700200061006600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020004400650020006f007000720065007400740065006400650020005000440046002d0064006f006b0075006d0065006e0074006500720020006b0061006e002000e50062006e00650073002000690020004100630072006f00620061007400200065006c006c006500720020004100630072006f006200610074002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e> /DEU <FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200075006d002000650069006e00650020007a0075007600650072006c00e40073007300690067006500200041006e007a006500690067006500200075006e00640020004100750073006700610062006500200076006f006e00200047006500730063006800e40066007400730064006f006b0075006d0065006e00740065006e0020007a0075002000650072007a00690065006c0065006e002e00200044006900650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000520065006100640065007200200035002e003000200075006e00640020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP 
<FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f0073002000640065002000410064006f00620065002000500044004600200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e0020006500200069006d0070007200650073006900f3006e00200064006500200063006f006e006600690061006e007a006100200064006500200064006f00630075006d0065006e0074006f007300200063006f006d00650072006300690061006c00650073002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /FRA <FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE <FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003b103be03b903cc03c003b903c303c403b7002003c003c103bf03b203bf03bb03ae002003ba03b103b9002003b503ba03c403cd03c003c903c303b7002003b503c003b903c703b503b903c103b703bc03b103c403b903ba03ce03bd002003b503b303b303c103ac03c603c903bd002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b903c2002e> /HEB 
<FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005E205D105D505E8002005D405E605D205D4002005D505D405D305E405E105D4002005D005DE05D905E005D4002005E905DC002005DE05E105DE05DB05D905DD002005E205E105E705D905D905DD002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E05D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV (Za stvaranje Adobe PDF dokumenata pogodnih za pouzdani prikaz i ispis poslovnih dokumenata koristite ove postavke. Stvoreni PDF dokumenti mogu se otvoriti Acrobat i Adobe Reader 5.0 i kasnijim verzijama.) /HUN <FEFF00410020006800690076006100740061006c006f007300200064006f006b0075006d0065006e00740075006d006f006b0020006d00650067006200ed007a00680061007400f30020006d0065006700740065006b0069006e007400e9007300e900720065002000e900730020006e0079006f006d00740061007400e1007300e10072006100200073007a00e1006e0074002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c00200068006f007a006800610074006a00610020006c00e9007400720065002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA (Utilizzare queste impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) 
/JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) /NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f0020006e00690065007a00610077006f0064006e00650067006f002000770079015b0077006900650074006c0061006e00690061002000690020006400720075006b006f00770061006e0069006100200064006f006b0075006d0065006e007400f300770020006600690072006d006f0077007900630068002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB 
<FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM <FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e007400720075002000760069007a00750061006c0069007a00610072006500610020015f006900200074006900700103007200690072006500610020006c0061002000630061006c006900740061007400650020007300750070006500720069006f0061007201030020006100200064006f00630075006d0065006e00740065006c006f007200200064006500200061006600610063006500720069002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS <FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043f043e04340445043e0434044f04490438044500200434043b044f0020043d0430043404350436043d043e0433043e0020043f0440043e0441043c043e044204400430002004380020043f04350447043004420438002004340435043b043e0432044b044500200434043e043a0443043c0435043d0442043e0432002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SLV 
<FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020007000720069006d00650072006e006900680020007a00610020007a0061006e00650073006c006a00690076006f0020006f0067006c00650064006f00760061006e006a006500200069006e0020007400690073006b0061006e006a006500200070006f0073006c006f0076006e0069006800200064006f006b0075006d0065006e0074006f0076002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO <FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR 
<FEFF005400690063006100720069002000620065006c00670065006c006500720069006e0020006700fc00760065006e0069006c0069007200200062006900720020015f0065006b0069006c006400650020006700f6007200fc006e007400fc006c0065006e006d006500730069002000760065002000790061007a0064013100720131006c006d006100730131006e006100200075007900670075006e002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f006200610074002000760065002000410064006f00620065002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /ENU (Use these settings to create Adobe PDF documents suitable for reliable viewing and printing of business documents. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
<s>Vector Representation of Bengali Word Using Various Word Embedding Model
Ashik Ahamed Aman Rafat, Mushfiqus Salehin, Fazle Rabby Khan, Syed Akhter Hossain and Sheikh Abujar
Dept. of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
E-mail: aman15-6858@diu.edu.bd, mushfique15-7056@diu.edu.bd, rabby15-6727@diu.edu.bd, aktarhossain@daffodilvarsity.edu.bd, sheikh.cse@diu.edu.bd
Proceedings of the SMART–2019, IEEE Conference ID: 46866, 8th International Conference on System Modeling & Advancement in Research Trends, 22nd–23rd November 2019, College of Computing Sciences & Information Technology, Teerthanker Mahaveer University, Moradabad, India

Abstract—Word embeddings transfer human understanding of language to a machine. Skip-gram, CBOW, and fastText are models that generate such embeddings, but pretrained embedding models for the Bengali language are hard for researchers to find, and training embeddings from scratch is time-consuming. In this paper, we discuss different word embedding models. To train them, we collected around 500,000 Bengali articles from various sources on the internet, from which we randomly chose 105,000 articles containing 32 million words. We trained the Skip-gram and CBOW variants of Word2Vec and fastText, and also trained a GloVe model. Among all the results, fastText gave us the most satisfactory result.

Keywords—Bengali Words, Skip-gram, CBOW, Word2Vec, fastText, GloVe, Word Embedding

I. Introduction
We humans understand words by their context or surrounding words. In communication, we share thoughts and ideas with each other through language. We can produce an infinite number of sentences with a finite number of words, which tells us that words can take separate meanings depending on the context in which they are used. But a computer does not understand words or their context. Here, distributed representations of words play a big role. In a distributed representation, a word is described as a 50–300-dimensional vector. Word embeddings are word representations that transfer the human interpretation of language to the machine, and many NLP problems can be solved through them. Many neural-network-based algorithms have entered the natural language processing field; in an RNN, for instance, the input is a sequence of words. Many researchers have shown that such neural networks perform better on various NLP tasks when given distributed representations of words. Among all word embedding models, Word2Vec, proposed by Mikolov and Dean [1], is the most popular. GloVe [2] is another word embedding model, presented by Pennington et al. Facebook developed a further word embedding model known as fastText [3]. Here we analyze these three word embedding model families for Bengali words.

II. Literature Review
Recently, much work has been done on Bangla word embedding. In 2016, Abhishek et al. proposed a neural lemmatizer for Bengali [5] that used the Word2Vec model. Adnan Ahmad and Mohammad Ruhul Amin [4] collected a large dataset of 2,185,701 articles comprising 51,920,010 sentences. For their Word2Vec model, they kept words that occurred a minimum of 5 times and released a Word2Vec embedding model with a dictionary of over 200k unique words. Nowshad et al. [6] built a multi-label sentence classification model using LibSVM and scikit-learn on corpora of 5,000, 7,500 and 10,000 sentences. In 2018, Ritu et al. [7] analyzed the most used embedding models for Bengali words. They used the SUMono [10]</s>
<s>dataset as well as their own dataset to train their models, and showed the differences between the Word2Vec and fastText models. The same year, Sumit et al. from Socian Ltd [8] made a Word2Vec model trained on a corpus of 623,510,478 words; they used CBOW and Skip-gram with an embedding dimension of 150 and a vocabulary of 1,245,974 words. Islam, Md Saiful [9] was the first to use ANN, CNN and RNN models with Bengali embeddings; they trained the CBOW and Skip-gram variants of Word2Vec and fastText, as well as GloVe, in ANN, CNN and RNN models. Their results show that fastText (Skip-gram) has the highest accuracy and GloVe the lowest across the ANN, RNN and CNN models.

III. Methodology
A. Data Collection and Pre-processing
Natural language processing needs a large amount of data. We built a web parser that collected around 500,000 articles from various websites. For legal and copyright reasons, we cannot disclose the websites' names. From the 500,000 articles we selected 105,000 for our word embedding work and named the collection corpus105k. We then trimmed the dataset with a minimum word frequency of 25: if a word occurred fewer than 25 times, we discarded it. Table 1 shows the total and unique word counts of the corpus105k dataset before and after trimming.

Table 1: Dataset demography
Total words before trimming    32,112,604
Unique words before trimming      505,383
Total words after trimming     30,731,553
Unique words after trimming        53,473

B. Word Embedding
When we communicate, we connect words according to their meanings: we say or write a word in the context of the previous or surrounding words. A computer does not understand this kind of thing on its own, so researchers have published many word embedding models that can help a machine to understand the context of sentences:

1. Skip-gram: This model predicts the nearby words when a target word is given. Consider the example "আমি বাংলায় গান গাইতে ভালবাসি" (I love to sing songs in Bangla). If we take the middle word 'গান' as the target word, the Skip-gram model will predict the possibility of 'আমি', 'বাংলায়', 'গাইতে', 'ভালবাসি' as surrounding words.
Fig. 1: Skip-gram Model Architecture

2. Continuous Bag-of-Words (CBOW): The CBOW model works in exactly the opposite way to the Skip-gram model: it predicts the middle word when the nearby words are given. Using the previous example, if we take the surrounding words 'আমি', 'বাংলায়', 'গাইতে', 'ভালবাসি', the CBOW model will predict the possibility of 'গান' as the middle word.
Fig. 2: CBOW Model Architecture

3. Global Vectors (GloVe): GloVe builds a large co-occurrence matrix containing how often each word occurs in the context of every other word. Afterwards, GloVe compresses this matrix into a lower-dimensional one by minimizing a reconstruction loss.

4.</s>
<s>FastText: This model is an extension of the Word2Vec model. fastText represents every word as a set of character n-grams. For example, if we take 'খবর' as a word with n=2, the fastText model represents it as <খ, খব, বর, র>, where the angle brackets mark the start and end of the word.

C. Experiment
There are many word embedding models. Among them, we chose to work with five prediction-based models:
• Word2Vec (Skip-gram)
• Word2Vec (CBOW)
• fastText (Skip-gram)
• fastText (CBOW)
• GloVe

Our dataset has 32 million words, from which we discarded the words that occurred fewer than 20 times; this leaves a vocabulary of about 53,000 words. We used the gensim library for the Word2Vec models, Facebook's fastText code for the fastText models, and the glove-python library for the GloVe model. Each model was trained for 50 epochs. The parameters used in the models are shown in Table 2, and a minimal training sketch using them follows below.

Table 2: Parameters (embedding dimension, window size, learning rate, minimum word count, negative sampling)
Model                 dim  win  alpha  count  neg
Word2Vec (Skip-gram)  300    5   0.03     20   20
Word2Vec (CBOW)       300    5   0.03     20   20
GloVe                 300    5   0.05      -    -
fastText (Skip-gram)  300    5   0.03     20   20
fastText (CBOW)       300    5   0.03     20   20

IV. Performance & Discussion
To check our models, we validated them with two experiments.

A. Nearby Words
We gave our models a word and asked each to show us ten nearby words. Each of the models gave a different result, but the diversity among the models is not huge. Tables 3–7 show the results for Word2Vec (Skip-gram), Word2Vec (CBOW), GloVe, fastText (Skip-gram) and fastText (CBOW), respectively; in the nearby-words column, the left-most word is the best match for the query word and the right-most word is the least match. [The Bengali contents of Tables 3–7 were lost during PDF text extraction and are omitted here.]</s>
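As a concrete illustration of the training setup above, here is a minimal sketch using gensim 4.x with the Table 2 hyperparameters. It is an assumption-laden stand-in, not the authors' published script: the corpus file name and whitespace tokenization are placeholders, and gensim's FastText class is used in place of Facebook's original fastText code.

# Hypothetical training sketch with the Table 2 hyperparameters.
# "corpus105k.txt" (one pre-tokenized sentence per line) is a placeholder;
# the paper does not release its corpus or preprocessing pipeline.
from gensim.models import FastText, Word2Vec

with open("corpus105k.txt", encoding="utf-8") as f:
    sentences = [line.split() for line in f]

common = dict(vector_size=300, window=5, alpha=0.03,
              min_count=20, negative=20, epochs=50)

w2v_sg = Word2Vec(sentences, sg=1, **common)    # Word2Vec (Skip-gram)
w2v_cbow = Word2Vec(sentences, sg=0, **common)  # Word2Vec (CBOW)
ft_sg = FastText(sentences, sg=1, **common)     # fastText (Skip-gram), gensim stand-in

# Ten nearest neighbours of a query word, as in Tables 3-7
# (the word must be in the trained vocabulary).
print(w2v_sg.wv.most_similar("গান", topn=10))

GloVe is not part of gensim; the authors trained it with the glove-python library, which is omitted from this sketch.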
<s>B. Cosine Similarity of Words
Cosine similarity between words measures the angle between their vectors; we used it to quantify the similarity of word pairs. Tables 8 and 9 show the cosine similarity and the corresponding angle for two pairs of Bengali words across the models.

Table 8: Cosine similarity and angle between the Bengali words ছেলে – মেয়ে
Model                 Cosine Similarity  Angle
Word2Vec (Skip-gram)       0.79          36.99
Word2Vec (CBOW)            0.73          42.28
GloVe                      0.51          59.12
fastText (Skip-gram)       0.80          36.38
fastText (CBOW)            0.73          42.61

Table 9: Cosine similarity and angle between the Bengali words রাজা – রানি
Model                 Cosine Similarity  Angle
Word2Vec (Skip-gram)       0.73          64.23
Word2Vec (CBOW)            0.43          67.34
GloVe                      0.73          43.11
fastText (Skip-gram)       0.49          60.39
fastText (CBOW)            0.42          64.98</s>
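The angle figures in Tables 8 and 9 follow directly from the cosine values. Below is a minimal sketch of the computation, assuming two numpy word vectors taken from any of the trained models; the function name and usage are illustrative only.

# Cosine similarity between two word vectors and the corresponding
# angle in degrees, as reported in Tables 8 and 9.
import numpy as np

def cosine_and_angle(u, v):
    cos = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    # clip guards against rounding slightly outside [-1, 1]
    angle = float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return cos, angle

# e.g. with a trained gensim model (hypothetical usage):
# cos, angle = cosine_and_angle(model.wv["ছেলে"], model.wv["মেয়ে"])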
<s>Five different models produced different word embeddings. Among them, fastText Skip-gram shows good results: it produces similar words as well as moderately related ones. GloVe gave us the worst result of the five models; it produces good results for some specific words but sometimes generates random words. The second-best result comes from the fastText CBOW model: it gives results similar to the fastText Skip-gram model, but with some random changes for certain words. Both of the Word2Vec models give results that are good but not great.

V. Conclusion and Future Work
Natural language processing is not an easy task, and Bangla is one of the more complex languages in the world. In our work, we give some intuition for Bengali word embedding. We trained all of the models on over 32 million words, although our dataset contains more than 1 billion words; due to computational limitations, we were not able to train on the full dataset. Even so, with 32 million words fastText gave a satisfactory result. The main purpose of word embedding is to learn a word's surroundings, and we report the nearest words for every model. For future work, we will investigate how the results of these models differ when they are trained on 1 billion Bengali words.

References
[1] Mikolov, T., Chen, K., Corrado, G. and Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781.
[2] Pennington, J., Socher, R. and Manning, C. D. (2014). GloVe: Global Vectors for Word Representation.
[3] Joulin, A. et al. (2016). Bag of Tricks for Efficient Text Classification. arXiv:1607.01759.
[4] Ahmad, A. and Amin, M. R. (2016). Bengali Word Embeddings and Its Application in Solving Document Classification Problem. 19th International Conference on Computer and Information Technology (ICCIT), Dhaka, pp. 425–430.
[5] Chakrabarty, A. and Garain, U. (2016). BenLem (A Bengali Lemmatizer) and Its Role in WSD. ACM Transactions on Asian and Low-Resource Language Information Processing, 15(3), 1–18. doi:10.1145/2835494.
[6] Hasan, Md Nowshad, Bhowmik, S. and Rahaman, Md. (2017). Multi-label Sentence Classification Using Bengali Word Embedding Model. pp. 1–6. doi:10.1109/EICT.2017.8275207.
[7] Ritu, Z., Nowshin, N., Nahid, Md M. and Ismail, S. (2018). Performance Analysis of Different Word Embedding Models on Bangla Language. pp. 1–5. doi:10.1109/ICBSLP.2018.8554681.
[8] Sumit, S., Hossan, Md Z., Muntasir, T. and Sourov, T. (2018). Exploring Word Embedding for Bangla Sentiment Analysis. doi:10.1109/ICBSLP.2018.8554443.
[9] Islam, Md Saiful (2018). A Comparative Analysis of Word Embedding Representations in Authorship Attribution of Bengali Literature.
[10] Al Mumin, M. A., Shoeb, A. A. M., Selim, M. R. and Iqbal, M. Z. SUMono: A Representative Modern Bengali Corpus.</s>
<s>LNCS 9041 - A Computational Approach for Corpus Based Analysis of Reduplicated Words in Bengali
© Springer International Publishing Switzerland 2015. A. Gelbukh (Ed.): CICLing 2015, Part I, LNCS 9041, pp. 456–466, 2015. DOI: 10.1007/978-3-319-18111-0_34

A Computational Approach for Corpus Based Analysis of Reduplicated Words in Bengali
Apurbalal Senapati1 and Utpal Garain2
1 Central Institute of Technology, BTAD, Kokrajhar-783370, Assam, India, apurbalal.senapati@gmail.com
2 Indian Statistical Institute, 203, B.T. Road, Kolkata – 700108, India, utpal.garain@gmail.com

Abstract. Reduplication is an important phenomenon in language studies, especially in Indian languages. Reduplication is the repetition of the smallest linguistic unit, partially or completely, i.e. the repetition of a phoneme, morpheme, word, phrase, clause or the utterance as a whole, and it yields different meanings at the syntactic as well as the semantic level. Reduplicated words play an important role in many natural language processing (NLP) applications, namely machine translation (MT), text summarization, identification of multiword expressions, etc. This article presents an algorithm for identifying reduplicated words in a text corpus and computes descriptive statistics of the reduplicated words frequently used in Bengali.

Keywords: Reduplication, Bengali, Corpus, Descriptive statistics, Evaluation.

1 Introduction
Reduplication is one of the highly productive morphological processes in Bengali. It is frequently used in the language for various linguistic and pragmatic reasons and purposes. Reduplicated words are used in text in different ways and manners to serve various means of information-sharing and communication. Although reduplication is mostly used to express a sense of multiplicity of countable items, it is also used to refer to the continuation of an action or an event [1], or something else. For example, S1: আপনি কোন গ্রামে যেতে চান? / aapni kon grame jete chan? (Which village do you want to visit?); S2: আপনি কোন কোন গ্রামে যেতে চান? / aapni kon kon grame jete chan? (Which are the villages you want to visit?). Clearly, in sentence S2 the semantics change to plural, due to the reduplication of the word কোন / kon (which). Similarly, consider S3: ঘরে কোনো লোক নাই / ghare kono lok nai (There is no one in the house); S4: ঘরে ঘরে বেকার যুবক / ghare ghare bekar jubak (unemployment is in every house). The semantics of the reduplication of ঘরে / ghare (in house) differ between S3 and S4: in S3, ঘরে / ghare means "in the house", but in S4, ঘরে ঘরে / ghare ghare means "in every house". It is thus clear that in many NLP applications, especially MT, the semantics of reduplication have to be considered carefully in order to achieve high accuracy. For example, the machine translation (using the Bengali-to-English Google translator, dated 28th January 2015) of both S5: সে খেতে আসছে / se khete aaschhe and S6: সে খেতে খেতে আসছে / se khete khete aaschhe is "He is coming to eat" in</s>
<s>both cases. But the actual translations are "He is coming to eat" and "He is coming while eating", respectively. The wrong translation of sentence S6 is obviously due to a failure to capture the semantics of reduplication. Similarly, in many NLP applications reduplication has to be handled separately in order to reduce semantic-analysis errors.

2 Types of Reduplicated Words in Bengali
The process of reduplication is quite frequent in Bengali. A large number of words are capable of producing valid reduplications, but in practice most of them are not used, or are used with very low frequency. It is also observed that reduplication can occur with all word categories, including pronouns and indeclinables. From the structural point of view there are six types of reduplication in Bengali texts [1], as follows:

i) The repetition of the same word as a second member, without the addition of any suffix or inflection to either member, i.e. proper reduplication. Examples: হাসি হাসি / hasi hasi (smiling) [হাসি / hasi (smile)]; বছর বছর / bachhar bachhar (every year) [বছর / bachhar (year)]; লাল লাল / lal lal (red, in a plural sense) [লাল / lal (red, in a singular sense)]; ভালো ভালো / bhalo bhalo (good, in a plural sense) [ভালো / bhalo (good, in a singular sense)]; দিন দিন / din din (day by day) [দিন / din (day)], etc. Note that in this category each individual word has a valid meaning, which differs from (though in some cases is the same as) the reduplicated meaning.

ii) The first word is repeated; the first word carries no inflection but the second word carries an inflection. Examples: ধব ধবে / dhab dhabe (pure white colour) [ধব / dhab and ধবে / dhabe are not valid words]; টক টকে / tak take (deep red colour) [টক / tak (sour), but টকে / take is not a valid word]; লক লকে / lak lake (flickering / attractive) [লক / lak and লকে / lake are not valid words], etc. Note that in this category each individual word is not a valid word, whereas the reduplicated form is meaningful.

iii) The first word is inflected and the inflected word is then repeated. Examples: ঘরে ঘরে / ghare ghare (in every house) [ঘরে / ghare (in house)]; কানে কানে / kane kane (secretly) [কানে / kane (in ear)]; গাছে গাছে / gachhe gachhe (in every tree) [গাছে / gachhe (in tree)], etc. Note that this category is also proper reduplication, and here too the semantic behaviour is the same as in category i).

iv) A semantically similar (or almost similar) word is added to the first word to generate the reduplicated form. Examples: চাল চুলো / chal chulo (economically poor) [চাল / chal (rice), চুলো / chulo (cooking burner)]; চুরি চামারি / churi chamari (robbery) [চুরি / churi (theft), চামারি / chamari (illegal work)]; অলি গলি / ali gali (narrow lanes with complicated directions) [অলি / ali (narrow lane), গলি / gali (narrow lane)], etc. Note that in this case the semantic meaning of each individual word is almost the same, and the reduplicated meaning is also almost</s>
<s>similar to the individual word.

v) An echo word is added as the second member to the first word to generate the reduplicated form. Examples: জল টল / jal tal (water, beverages, etc.) [জল / jal (water) and টল / tal (echo word)]; খাবার দাবার / khabar dabar (varieties of food) [খাবার / khabar (food) and দাবার / dabar (echo word)]; মাছ টাছ / mach tachh (egg, fish, meat, etc.) [মাছ / mach (fish) and টাছ / tachh (echo word)], etc. Note that in this case the first word has a specific meaning, but after adding the echo word the meaning changes. Also note that the composite meaning is usually similar to the first word in a plural sense, though this property does not hold in all cases.

vi) Onomatopoeic words made of two words of identical structure. Examples: ছম ছম / chham chham (the feeling of the sound of silence); খিল খিল / khil khil (the sound of laughter); ঝিন ঝিন / jhin jhin (jingling), etc. Note that this category is also proper reduplication, and here the semantic meaning is related to the (real or virtual) sound of different events.

Reduplicated words can also be classified from other perspectives, such as the phonological, morphological, lexical or constructional perspective. From a functional point of view, reduplications can also be classified by part of speech. But our present work concentrates only on the computational aspect of identifying reduplicated words in a corpus.

3 Existing Work on Reduplicated Words in Bengali
Most of the existing work on reduplication has been contributed by linguists, and it started long ago for many Indian languages. Ananthanarayana [2] describes reduplication in Sanskrit and Tamil; Abbi [3] focuses on different aspects of reduplication in South Asian languages; Murthy [4] worked on the Kannada language; etc. Work on Manipuri reduplication is found in the identification of multiword expressions in Nongmeikapam's [5] work. For Bengali, linguistic studies are found in the work of Chattopadhyay [6], Chaudhuri [7] and Thompson [8]. From a computational point of view, Bandyopadhyay [9] has studied reduplicated words for semantics-based analysis, and Senapati [10] has studied reduplicated pronouns in an anaphora-resolution task for Bengali.

4 Our Contribution
From the literature survey it is clear that most of the analysis is from the linguistic point of view, and the works are common in nature, i.e. analysis of reduplicated words and attempts to capture their semantic meaning, whereas computational works are limited. Some basic issues, such as how many reduplicated words there are in Bengali, or with what frequencies reduplicated words appear, i.e. corpus-based statistics, have still not been studied. We propose an algorithm to identify reduplicated words in a text corpus, together with a dictionary-based tuning technique to enhance the accuracy of identifying such words. Finally, the frequencies of reduplicated words have been calculated at the word level as well as the sentence level.</s>
<s>5 Computational Approach to Identify Reduplication in Bengali
Our computational approach is based on the morphological similarity of the duplicated words. In our work, morphologically similar reduplicated words means words that are similar or almost similar in terms of their length, the characters used, and the vowel modifiers used. In Section 2 we saw that in categories (i), (iii) and (vi) the reduplication is formed by repetition of the same word, i.e. it is of the form "w w" where "w" is a word in the corpus. We also observed that in categories (ii), (iv) and (v) the reduplication is formed by repetition of an almost similar word. Hence, from the computational point of view we define two types of reduplicated words. Proper reduplication is the repetition of the same word; for example, খেতে খেতে / khete khete (continue eating), যেতে যেতে / jete jete (continue going); categories (i), (iii) and (vi) come under this type. The other type is partial reduplication, where the first and second words are not exactly the same but almost similar; for example, খাবার দাবার / khabar dabar (food etc.), চাল চুলো / chal chulo (economically poor); categories (ii), (iv) and (v) come under this type. There are some exceptional cases, e.g. মাথা মুণ্ডু / matha mundu (meaningless), লোটা কম্বল / lota kambal (belongings of a poor man), etc., which we do not consider here. The computational approaches for identifying the two types of reduplication are different, and they are handled by two different algorithms. Finally, to reduce errors, we use a dictionary- and frequency-based tuning technique. Detailed descriptions of the algorithms are given below.

Table 1. Algorithm to find proper reduplications in a text corpus
s1:  wi ← word from corpus
s2:  if wi contains "-" then
s3:      if wi is of the form "w-w" then        // type 2
s4:          print "re-duplication";
s5:          frequency = frequency + 1;
s6:      end if
s7:  else if wi is of the form "ww" then        // type 3
s8:      print "re-duplication";
s9:      frequency = frequency + 1;
s10:     end if
s11: else
s12:     wi+1 ← next word from corpus
s13:     if wi is equal to wi+1 then            // type 1
s14:         print "re-duplication";
s15:         frequency = frequency + 1;
s16:     end if
s17: end if

For the algorithmic approach, we first analyzed proper reduplication in terms of morphological similarity. From a lexical point of view, proper reduplication is of three types. The first type is of the form "w w", i.e. repetition of the same word with a single space; for example, খেতে খেতে / khete khete (continue eating). The second type is of the form "w-w" (or "w - w"), i.e. repetition of the same word with a "-" separator; for example, ধীরে - ধীরে / dhire dhire (slowly). The third type is of the form "ww", i.e. repetition of the same word without any space; for example, গজগজ / gajgaj (a feeling of irritation). The formal algorithm for this category is given in Table 1 above.</s>
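For concreteness, the following is a minimal Python rendering of the Table 1 procedure, assuming the corpus has already been tokenized into a list of word strings. The function name and single-pass structure are illustrative only; the authors' original implementation, including the dictionary-based tuning step and the spaced "w - w" variant, is not reproduced here.

# Sketch of the Table 1 algorithm for proper reduplication, assuming
# a pre-tokenized corpus. Overlapping "w w w" runs are counted twice,
# as in the single-pass pseudocode.
from collections import Counter

def find_proper_reduplications(tokens):
    counts = Counter()
    for i, w in enumerate(tokens):
        if "-" in w:                                  # type 2: "w-w"
            left, _, right = w.partition("-")
            if left and left == right:
                counts[w] += 1
        elif len(w) >= 2 and len(w) % 2 == 0 and w[:len(w) // 2] == w[len(w) // 2:]:
            counts[w] += 1                            # type 3: "ww", no separator
        elif i + 1 < len(tokens) and tokens[i + 1] == w:
            counts[w + " " + w] += 1                  # type 1: "w w"
    return counts

# Example: "সে খেতে খেতে আসছে" yields one occurrence of "খেতে খেতে".
print(find_proper_reduplications("সে খেতে খেতে আসছে".split()))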