cluster Bangla words using their semantic and contextual similarity. In this approach they tried to cluster the words based on the idea that words which have similar meanings and are used in similar contexts in a sentence belong to the same cluster.

Their work was slightly upgraded later in 2016 by Urmi, Jammy and Ismail [1]. They proposed an unsupervised learning approach to identify the stem or root of a Bangla word from the contextual similarity of words. Their objective was to build a large corpus of Bangla stems along with their respective inflectional forms. They worked with the assumption that if two words are similar in spelling and are used in similar contexts in many sentences, they have a higher chance of originating from the same root. They implemented a 6-gram model for stem detection and achieved an accuracy of 40.18%. They concluded that with a larger amount of text data this model will improve further. Some examples of the clusters they found are given in Table 2.1.

Table 2.1: Clusters formed using the N-gram approach [1]
Root Word | Words in Cluster
েছাট | েছাটেদর, েছাট্ট, েছাটখাট, েছােটা
এখন | এখনকার, এখেনা, এখনই, এখনও
আইন | আইনী, আইিন, আইনত, আইনমন্তৰ্ী
িঠক | িঠকমেতা, িঠকভােব, িঠকঠাক
গাছ | গাছগুেলা, গাছিট, গাছপালার, গাছপালা

Various other approaches to producing word clusters have also been explored. We can see an example in the work of Sinha, Dasgupta and Jana [9], who proposed to construct a hierarchically organized Bangla semantic lexicon. To measure the semantic similarity between two Bangla words they applied a graph-based edge-weighting approach. This lexical organization is presented through a graphical user interface developed by them. They also attached some details to the words, such as whether a word is mythological or whether it is a verb.

Researchers then focused on producing word clusters with dynamic approaches and on their performance. We get insights about this from the work of Yuan [10], who showed that a word clustering technique based on word similarities is better than the conventional greedy approach in terms of speed and performance. The basic approach of this work was to check, for a given word in the corpus, its co-occurring words for similarity. That is to say, if two words are similar, their co-occurring word patterns will also be similar. Based on this they computed word clusters, and when compared with other clustering methods this approach was found to be more efficient.

In the case of the Bangla language, Hadi, Khan and Sayed [11] proposed a framework for extracting semantic relations among Bangla words. They discussed the extraction of synonyms, hyponyms, hypernyms, antonyms, etc. as a rule-based model, using a semantic analyser on nouns, adjectives and verbs.

The performance of dynamic models in producing word clusters was shown by Ahmed and Amin [12]. They discussed the effect of a Bangla word embedding model on document classification. They worked with a dataset prepared from Bangla newspaper documents and applied the word2vec model to generate vector representations of words for word clustering. Using this they prepared clusters of word embeddings that lie in close proximity to each other in the feature space. This information was later used as features to solve a Bangla document classification problem.
Attempts to improve the performance of word2vec in finding vector representations of words in huge datasets, such as a dataset containing one billion words, were made by Rengasamy, Fu, Lee and Madduri [13]. They applied word2vec on a multi-core system and found that their approach is 3.53 times faster than the original multi-threaded word2vec implementation and 1.28 times faster than a recent parallel word2vec implementation.

Ma and Zhang [14] discussed the effect of word2vec in reducing the dimensionality of large datasets. They found that, when dealing with large-scale training data, word2vec helps in clustering similar data. This strategy can reduce the data dimension and speed up multi-class classification.

Robert Bamler and Stephan Mandt [15] tried to trace the semantic evolution of individual words over time in time-stamped datasets. They applied the word2vec model to produce the embedding vectors and showed experimentally that both skip-gram filtering and smoothing lead to smoothly changing embedding vectors that help predict contextual similarities at held-out time stamps.

The fastText model is a relatively new model for producing word clusters. It is a variation of the skip-gram architecture of the word2vec model and was proposed by Bojanowski, Grave, Joulin and Mikolov [16]. In their method, each word is represented as a bag of character n-grams and its vector representation is constructed from them. This allowed them to construct word clusters for words not present in the training data. They concluded that this method gives state-of-the-art word representations for both similarity and analogy tasks.

Finally, we can say that a rich literature is growing on the construction and implementation of Bangla WordNets and on word embedding and word clustering techniques in Bangla natural language processing.

2.2 WordNets In Other Languages

Because of the importance of WordNet, many attempts have been made to develop WordNets for languages all over the world. As stated earlier, the Princeton WordNet [6] first introduced the concept and proposed a baseline for constructing a WordNet in any language. Following it, many WordNets have been constructed so far and many are being developed in different languages. We discuss some of these works in this section.

The Arabic WordNet (AWN) [17] was constructed based on the design and contents of the universally accepted Princeton WordNet (PWN) and was mapped onto PWN 2.0. This was achieved by English-to-Arabic translation and, for each Arabic word, finding all its senses and assigning it to its proper synset.

The Polish WordNet, titled PolNet [18], only groups nouns and verbs. Vetulani and Kochanowski applied the "merge model" approach in developing it. It currently consists of 11,700 synsets for nouns and 1,500 synsets for verbs.

The construction of the Hindi WordNet [19] first followed the construction method of the PWN and categorized words into nouns, adjectives, verbs and adverbs. But because of the complexity of the Hindi language this did not prove very fruitful. After exploring other categorizations, they finally decided on dividing the words into two basic categories: one for words that can be categorized by the universal process, and the other for specifically Indian words that need special handling.

The insights from the Hindi WordNet play a big role in the construction of WordNets for other Indo-Aryan languages. This was reflected in the IndoWordNet [4] project, which gives an overview of the construction process of WordNets for Indo-Aryan languages such as Gujarati, Assamese, Malayalam, Tamil, Telugu and Bangla.
It reached the decision that the WordNets for these languages can be developed by the "merge and expansion" method on the basis of the Hindi WordNet.

Figure 2.3: Linked IndoWordNet structure [4]

The Marathi WordNet developed by Ram and Mahender [20] created a database consisting of more than 10,000 Marathi words together with their noun, adjective, verb and adverb senses and semantic relations like synonymy, hypernymy, holonymy and ontology, stored as multiple related tables. This WordNet has 36,842 unique words grouped into more than 26,988 synsets.

The Sanskrit WordNet [21] was also developed following the merge and expansion method, with the Hindi WordNet as the source resource. This WordNet includes features like verbal concepts and gender but does not include semantic relations like ontology.

From the works discussed above we get an overall idea about the construction of a new WordNet for any language and the approaches applied so far for doing so. This also gives us an idea about how to approach the task for an Indo-Aryan language like Bangla, and about the challenges involved. It will help us choose our approach to accomplish this task.

2.3 Uses of WordNet

A WordNet not only serves as a lexical resource for a language; it can also contribute to many other NLP research works, as well as give insights and support decisions in many language-based applications. We shed light on some of the works concerning the use of WordNet in this section.

As WordNet is structured around word similarity, it can be used to measure the relatedness of concepts. This was discussed by Pedersen, Patwardhan and Michelizzi [22]. They proposed that WordNet can prove really useful in measuring similarities between concepts, as it organizes nouns and verbs in hierarchies according to their relations. Because WordNet stores information about both the similarity and the dissimilarity of words and the contexts they are used in, all this information can be utilized to establish similarity measurements between concepts.

We can assume from this work that WordNet can also help in the categorization of documents. More specifically, text categorization can be performed with the help of the information stored in WordNet. This was attempted by Elberrichi, Rahmoun and Bentaalah [23]. They used the synonymy and hyponymy relations of WordNet for the text categorization process. Applying this method, they found that the use of conceptual ideas from WordNet improves the text categorization process compared to the traditional bag-of-words approach.

WordNet can be a very effective tool in information retrieval (IR) systems. As a WordNet establishes relations between words, the information in the WordNet database can help improve query results in IR systems. Mandala, Takenobu and Hozumi [24] attempted to develop a method of making WordNet more useful in information retrieval applications. Their experiments were done using several standard information retrieval test collections. They showed that using a broadened coverage of WordNet together with a weighting method results in a significant improvement of information retrieval performance.

Gharat and Gadge [25] proposed a new method for web information retrieval. They applied a new term weighting technique called concept-based term weighting (CBW) to give a weight to each query term, determining its significance by using the WordNet ontology.
This experiment was tested using a web dataset consisting of random web pages. They reached the conclusion that this method gives better performance than the traditional TF-IDF term weighting approach.

WordNet can also prove to be a big contributor in resolving the senses of ambiguous words. A method for word sense disambiguation based on domain information and the WordNet hierarchy was proposed by Kolte and Bhirud [26]. They used an unsupervised approach to determine the domain of a given target word in the WordNet domains, and the sense associated with that domain was taken as the sense of the target word.

From the discussions above we can see that a constructed WordNet will not only be a lexical database for a language but can also contribute to many other natural language research works and help achieve better performance.

Chapter 3
Methodology

Our thesis work is to construct a Bangla WordNet. A WordNet builds a relational architecture for words in which all the words are connected to each other through relations like synonymy, hyponymy, hypernymy, antonymy, etc. The English WordNet is connected based on all of these relationships. But as the construction of a WordNet for a new language is a huge and challenging task, we focus on constructing the Bangla WordNet based only on the semantic relationship between words. We try to connect each word with the other words that have the same meaning and can be used in a sentence in place of one another.

To establish this relationship we first need to group the words having similar meanings. For this we need a word embedding method. Word embedding is based on the concept that words having similar meanings tend to occur in similar contexts. So the first step we implemented is grouping the words based on their contextual similarity.

The previous approaches to word embedding for the Bangla language mostly involved N-gram models. But as seen from the discussions in Chapter 2, dynamic models can prove better than N-gram models at producing word clusters. So in the first part of this chapter we discuss the previous approaches and our reasons for choosing the word2vec model, and give an overview of how this model works. Then we discuss in detail the steps we followed in our implementation.

3.1 Data Collection

The data used for constructing a WordNet is usually collected from various sources. In this work we used a large Bangla text corpus to construct the word clusters. We used three separate corpora and merged them. The first corpus is SUMono [27], which contains available online and offline Bangla text data. We also used a news corpus, which contains news data from Bangla news websites, and Bangla text from Wikipedia. Details are given in Table 3.1. The accuracy of any model largely depends on the dataset it is applied to: if a word is used in many kinds of sentences, the trained model can be more accurate because it covers a large variety of contexts, and the more frequent a word is, the more accurately the model can represent it. Figure 3.1 shows some of the most frequent words in the corpus.

Figure 3.1: Histogram of Most Frequent Words with Number of Occurrences
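The thesis does not list the script used to produce these counts; the following is a minimal sketch, assuming the merged corpus is stored as a plain UTF-8 text file (the path corpus.txt is a placeholder), of how the word frequencies behind a histogram like Figure 3.1 can be computed.

```python
from collections import Counter

counts = Counter()
with open("corpus.txt", encoding="utf-8") as corpus:     # placeholder path to the merged corpus
    for line in corpus:
        counts.update(line.split())                       # simple whitespace tokenization, line by line

print("unique words:", len(counts))
for word, frequency in counts.most_common(20):            # the most frequent words, as in Figure 3.1
    print(word, frequency)
```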
This corpus consists of around 5,00,000 unique Bangla words and contains Bangla text data on various topics. It was built by taking content from various sources, such as Bangla articles from Wikipedia, articles from Bangla news portals, and the writings of different renowned Bengali writers. As the text is collected from various sources, it covers topics of different kinds and sectors as well as the structure of Bangla as it is used in day-to-day life. This is an important aspect of the data, because to get better and more accurate clusters dynamically we need data that covers the wide range of contexts in which any specific word can be used. We construct the clusters based on the contexts the words appear in, so a large dataset covering varied topics is necessary to get better results.

Detailed information about the corpus is given below.

Table 3.1: Details of the Corpus
Total sentences: 1,593,398
Total words: 2,51,89,733
Unique words: 5,21,391

3.2 Previous approaches to word embedding

There have been many previous works on word embedding in Bangla, and many have tried to find a good method for it. Most of the previous techniques used N-gram models. We can take the work of Ismail and Rahman [8] as an example. They used a tri-gram language model in which two lists of words were generated for every word: one containing the two words preceding it and the other containing the two words following it. For each pair of words in the corpus they then counted the total number of matches between the preceding and following word lists, and from that a similarity score was computed. If the score crossed a predefined threshold value, the pair of words was said to belong to the same cluster. An illustrative sketch of this context-overlap scoring is given below.

Later, however, the work of Ahmed and Amin [12] showed that word clusters produced by dynamic models perform better than clusters produced using N-gram models. In the N-gram approach each word in the corpus is compared with all the other words in the corpus; this 5-lakh-by-5-lakh comparison process is really time consuming and not memory efficient. Another problem was that, as there is no standard dataset, an actual accuracy could not be computed; only a subjective accuracy score was given rather than an accuracy measured against a reference dataset.
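The implementation of [8] is not reproduced in the thesis; the sketch below is only our illustrative reading of the description above, using sets for the preceding and following context lists. The window of two words on each side follows the description, while the function names, threshold value and toy sentences are assumptions made for the example.

```python
from collections import defaultdict

def context_sets(sentences, window=2):
    """For every word, collect the words seen up to `window` positions
    before it and after it anywhere in the corpus."""
    before, after = defaultdict(set), defaultdict(set)
    for sentence in sentences:
        tokens = sentence.split()
        for i, word in enumerate(tokens):
            before[word].update(tokens[max(0, i - window):i])
            after[word].update(tokens[i + 1:i + 1 + window])
    return before, after

def similarity(w1, w2, before, after):
    """Count of shared preceding plus shared following context words."""
    return len(before[w1] & before[w2]) + len(after[w1] & after[w2])

# Toy usage: a word pair is placed in the same cluster when the score
# crosses a predefined threshold (the value 2 here is arbitrary).
sentences = ["আমি ভাত খাই", "আমি মাছ খাই"]
before, after = context_sets(sentences)
THRESHOLD = 2
print(similarity("ভাত", "মাছ", before, after) >= THRESHOLD)   # True
```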
3.3 Our approach

Over the years many techniques have been introduced for word clustering. Most of these approaches give high accuracy when clustering English words; as Bangla is a more complicated language, it is harder to reach high accuracy. We attempted three different approaches to determine which works better for Bangla word clustering. Nowadays, vector representations of words have become the most popular basis for building word clusters, so we applied three machine learning approaches that are based on vector representations of words. The full methodology of our work is discussed in detail below.

3.3.1 Vector representation of words

Text documents are traditionally represented with the "bag of words" model in natural language processing (NLP). This means each document is represented as a fixed-length vector whose size is equal to the vocabulary size of the data. Each index of this vector stores the number of occurrences of a specific unique word. This process effectively reduces a variable-length document to a fixed-length vector, which is easier to work with in various machine learning tasks like clustering, classification and topic segmentation. The process is shown graphically in the figure below.

Figure 3.2: Vector representation of a text document

3.3.1.1 Word Embedding

Word embedding is a word representation method that produces similar representations for words having similar meanings. It is based on the concept that words with similar meanings occur in similar contexts in a text document, and it gives us the semantic relationships of the words in a text. In word embedding methods, words are represented as fixed-length vectors, organized in such a way that words that are similar in context are represented by vectors lying in close proximity to each other. The word embedding process enables us to leverage information from a large corpus by constructing vector representations of its words.

Producing word embeddings from vector representations of words is now a very popular method, as it is more efficient than the traditional N-gram approach: it reduces memory usage and decreases processing time. Utilizing this method we can therefore resolve the difficulties discussed in the previous section. In our work we applied different variations of word embedding models based on vector representations of words. But some pre-processing was needed on the data before feeding it to the dynamic word embedding models. We discuss this in the next section.

3.3.2 Pre-processing steps

We cannot feed a word to the word embedding models simply as a text string; we need a way to present the words to the network. For this, we followed some pre-processing steps to prepare the data to be fed to the word embedding models for producing the word clusters. These steps are discussed below.

• Corpus: The corpus is stored as a text file in which the data consists of Bangla sentences. They can be treated as strings, but we cannot feed the strings directly to the models, so some pre-processing was done on the dataset.

• Tokenizing: We had to pre-process the corpus in order to use it as proper input. Since a model cannot consume raw text strings, we tokenized each sentence into words (a minimal sketch of this step is shown after this list). Example:
আমার সােথ বাংলায় কথা বল => 'আমার', 'সােথ', 'বাংলায়', 'কথা', 'বল'

• Training: We used the tokenized dataset for training. The tokenized words are fed to the model as input, the model builds its vocabulary from this input, and the vocabulary is then used by the different models to generate the word vectors and construct the word clusters from them.

After following these basic pre-processing steps, the data is fed to a dynamic word embedding model to produce the clusters.
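The exact cleaning rules used in the thesis are not spelled out; the following is a minimal sketch of the tokenization step illustrated above, assuming simple whitespace splitting plus stripping of common punctuation.

```python
import re

def tokenize(sentence):
    """Split a Bangla sentence on whitespace and strip surrounding punctuation.
    This is a simplification; the thesis does not specify its exact rules."""
    tokens = re.split(r"\s+", sentence.strip())
    return [t.strip("।,;:!?\"'()") for t in tokens if t]

print(tokenize("আমার সােথ বাংলায় কথা বল"))
# ['আমার', 'সােথ', 'বাংলায়', 'কথা', 'বল']
```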
There are various dynamic models for producing word embeddings, and each varies in result, performance and accuracy depending on the dataset it is applied to. There is also not much insight available into the performance of different dynamic word embedding models for the Bangla language. So instead of choosing one specific dynamic model, we applied different variations of dynamic word embedding models to our dataset to find the model most appropriate for building the WordNet. We discuss these models in the next section.

3.3.3 The word2vec model

Word2vec is a dynamic word embedding model that utilizes vector representations of words and produces word embeddings from them. In the first step of our implementation we applied three variations of the word2vec model to produce word embeddings and compared the results. We discuss the whole process in detail in this section.

In this model, a neural network with a single hidden layer is trained to perform a task. But the network is not actually used to perform that task; instead we learn the weights of the hidden layer, which are the "word vectors". In order to do so, the neural network must be trained. For example, suppose we are going to train the neural network to do the following task.

A specific word, called the pivot word, is given in the middle of a sentence. If we look at the words nearby and randomly pick one, the network will tell us, for every word in our vocabulary, the probability of it being the nearby word that was picked. Here "nearby" is defined by a "window size", which is a parameter of the algorithm. If the window size is 3, it means 3 words behind and 3 words ahead, that is 6 in total.

The output probabilities are related to how likely each word is to be found near the given input word. For example, if we gave the trained network the word 'খাই' as input, the output probabilities would be higher for words like 'ভাত' and 'মাছ' than for unrelated words such as 'খাতা' and 'কলম'.

We train the neural network to do this task by giving it pairs of words found in our training data. The example shows some of the training samples we would take from the sentence 'েতামার পৰ্িত আমার েকান অি েযাগ েনই'; a small window size of 2 is used just for the example, with the input word highlighted in red and the words inside the window highlighted in blue.

There are two variations of the word2vec model: the Continuous Bag of Words (CBOW) architecture and the skip-gram architecture. In the CBOW architecture the pivot word is predicted from a set of context words. In the skip-gram architecture the process is reversed: the context words are predicted from the pivot word.

The word2vec model can be implemented with both the Tensorflow library and the Gensim library package. We applied both of these in our implementation to produce word embeddings for different words and compare the results.

• Experiment I: Word2vec in Tensorflow
We used the official sentence embedding implementation from Tensorflow. The code generates word clusters based on the features of word2vec using the skip-gram model and the negative sampling accelerated classification algorithm. This model needs some parameters to be specified, such as the window size, vector size and number of iterations. We tried different window sizes, vector sizes and iteration counts to find the optimal results for the model. The results vary with changes in these parameters, and fine tuning was needed to get optimal results; this is shown in detail in the result analysis chapter.

• Experiment II: Word2vec from Gensim package (Skip-gram model)
The Python library Gensim provides the Word2Vec class for producing word embeddings. It is a built-in class that follows the basic working process of word2vec and produces the word embeddings given the window size and vector size. We first used the skip-gram architecture to produce the word embeddings. Parameter tuning was needed in this case too; different window and vector sizes were tried out to find the optimal ones. A minimal sketch of this setup is given below.
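The thesis does not list the actual Gensim calls; a minimal sketch of Experiment II's skip-gram setup, using the window size 5, vector size 400 and 5 iterations reported later in Table 4.1, might look like the following. Parameter names follow Gensim 4.x, and tokenized_sentences is a placeholder for the pre-processed corpus.

```python
from gensim.models import Word2Vec

# Placeholder: the real input is the full tokenized corpus from the pre-processing step.
tokenized_sentences = [["আমার", "সােথ", "বাংলায়", "কথা", "বল"]]

model = Word2Vec(
    sentences=tokenized_sentences,
    sg=1,             # 1 = skip-gram, 0 = CBOW
    vector_size=400,  # embedding dimension (Table 4.1)
    window=5,         # context words taken on each side of the pivot word
    min_count=1,
    epochs=5,
)

# Ten nearest neighbours of a pivot word in the embedding space.
print(model.wv.most_similar("কথা", topn=10))
```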
• Experiment II: Word2vec from Gensim package (CBOW model)
As the next step we implemented the CBOW (Continuous Bag of Words) architecture with the built-in Gensim Word2Vec class. As discussed before, CBOW works in the opposite way to skip-gram, so the results changed significantly from the ones produced by the skip-gram architecture. After trying out different window and vector sizes we found the optimal results, which are shown in the result analysis chapter.

3.3.4 FastText Model

Facebook's AI Research lab created the FastText library for producing word embeddings and for text classification. The model allows an unsupervised or supervised learning algorithm to be built for obtaining vector representations of words; the word embeddings constructed from them can later be used for text classification. They also released pre-trained word vectors for 157 languages [28], trained with fastText on the available Wikipedia data of those languages. FastText uses the character n-grams of a word and creates the word's vector as the sum of the vectors of all its n-grams. A big improvement of this model over other dynamic word embedding models is that it can produce embeddings for out-of-corpus words: even if a word given to the model was not present in the training data, it can still produce an embedding for that unknown word.

Although pre-trained word vectors are provided for Bangla as well, we did not use them in our implementation. We ran the fastText library on our own dataset to build the word vectors first and then constructed the word embeddings from them.

Like word2vec, fastText has both skip-gram and CBOW architectures, and they work the same way as in word2vec: skip-gram predicts the context words from the pivot word, while CBOW uses the context words to predict the pivot word. We implemented both variations of the fastText library to compare the word embeddings they produce. They are discussed below, followed by a short setup sketch.

• Experiment III: FastText Skip-gram model
In this step we first used the FastText skip-gram architecture to prepare the vector representations of the words in our corpus and then constructed the word embeddings from them. The window size, vector size and iteration parameters were specified in this case. With a little parameter tuning this model produced satisfactory embeddings, which are shown in the result analysis chapter.

• Experiment III: FastText CBOW model
As the next step we implemented the CBOW (Continuous Bag of Words) architecture of the FastText library, applied it to our dataset and built the word embeddings from it. The parameter tuning for this one was the same as for the FastText skip-gram architecture, but the results of the two models differed since they are the reverse of each other.
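As with word2vec, the exact training calls are not given in the thesis; a minimal sketch of the FastText experiments using Gensim's FastText class (an alternative interface to Facebook's own fasttext package) with the values reported in Table 4.1 could look like this. The subword n-gram lengths and the toy input are assumptions.

```python
from gensim.models import FastText

# Placeholder: the real input is the full tokenized corpus.
tokenized_sentences = [["আমার", "সােথ", "বাংলায়", "কথা", "বল"]]

# Experiment III, skip-gram variant; set sg=0 for the CBOW variant.
model = FastText(
    sentences=tokenized_sentences,
    sg=1,
    vector_size=100,   # Table 4.1: FastText reached its optimum with smaller vectors
    window=5,
    min_count=1,
    epochs=5,
    min_n=3, max_n=6,  # lengths of the character n-grams used as subword units
)

print(model.wv.most_similar("কথা", topn=10))
```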
By implementing these five variations of dynamic word embedding models on our dataset, we constructed different word clusters for the different models and compared the outputs to reach a decision about which model is the most suitable for the construction of a Bangla WordNet. The details and outcome of the comparison are shown in the result analysis chapter.

3.3.5 Dictionary Parsing

A WordNet not only connects the words but also contains the meaning, definition, uses, parts of speech and other grammatical information of all the connected words. The clustering process helps us connect the words and establish the relationships among them. As the next step, we need to attach the necessary information to the specific words. For this we need a Bangla dictionary from which the detailed information of the connected words can be taken and assigned to each word. For this purpose we chose a Bangla-to-Bangla dictionary, the Bangla Academy Ovidhan, which contains detailed information about Bangla words such as meaning, definition and parts of speech. We collected this dictionary and prepared it in text format for our use. Below is a small example of the dictionary we used.

Example:
• ইচ্ছা: িব ১ অিভলাষ; সৃ্পহা; বাসনা; রুিচ; পৰ্বৃিত্ত (খাওয়ার ইচ্ছা নাই) ।
• আিম: সবর্ বক্তা িনেজ । উত্তম পুরুষ; বােক বক্তা ।

In the first example, 'ইচ্ছা' is the target word, and its Bangla meaning 'অিভলাষ' follows. 'িব' stands for 'িবেশষ ', i.e. noun in English, giving the part of speech of the word, and the rest of the string gives synonyms of the target word along with an example of its use. In the second example, the Bangla meaning of the target word 'আিম' is given by 'বক্তা িনেজ', 'সবর্' stands for 'সবর্নাম', i.e. pronoun in English, indicating the part of speech, 'উত্তম পুরুষ' means it is a first-person word, and the rest of the string describes its uses.

We processed the collected dictionary according to our needs for parsing. Parsing is the process of analyzing a string or text into logical syntactic components. In our work, the dictionary is parsed into key-value pairs, where the key denotes a specific target word and the value holds all the necessary information associated with that word. So, if we query using the key, we get back the value, which represents the detailed information about the word used as the key. A minimal parsing sketch is given below.
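The actual dictionary file format is not shown in the thesis; the sketch below assumes one entry per line with the headword separated from the rest by a colon, as in the two examples above, and parses it into the key-value form described. The file name is a placeholder.

```python
def parse_dictionary(path):
    """Parse a Bangla-to-Bangla dictionary text file into a dict mapping each
    headword (key) to its full entry string (value): meaning, part of speech,
    usage examples, and so on."""
    entries = {}
    with open(path, encoding="utf-8") as dictionary_file:
        for line in dictionary_file:
            line = line.strip().lstrip("• ").strip()
            if ":" not in line:
                continue                      # skip malformed or empty lines
            word, details = line.split(":", 1)
            entries[word.strip()] = details.strip()
    return entries

# Usage: querying with the key returns the stored details of that word.
# dictionary = parse_dictionary("bangla_academy_ovidhan.txt")   # placeholder path
# print(dictionary["ইচ্ছা"])
```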
3.3.6 Hierarchy Building

For constructing the WordNet we next need to focus on establishing the relationships among the words and connecting them according to those relations, building up the WordNet structure. In a WordNet, the main relation among words is synonymy: words having the same meaning, denoting the same concept, words that can be used in place of one another, are connected to each other. These connected words also have their definitions attached to them. They form small groups, the groups in turn connect to each other via semantic relations, and by following this process for all the words of a language the WordNet network gradually builds up. Another aspect of a WordNet is that the words also carry their parts of speech information. A noun connects to its synonyms, which are nouns too; they in turn connect to their own synonyms, and in this way words belonging to the same part of speech ultimately belong to one big group in the WordNet structure.

Figure 3.3: Example of Hierarchy of Word relations

Figure 3.3 shows an example structure of the WordNet hierarchy. From the root word 'এক' the network spreads out, connecting similar words to the root word.

In our implementation we build from the level of each individual word. First we have to find the connected words of each word and establish the relation; then we have to find the words connected to those first-level words. The clusters constructed by the word embedding models come in handy in this step, as they give us an idea about which words are related to which. We can build the WordNet network from this information. We first compared the clusters produced by the different models to choose one appropriate model for building the hierarchical structure of the WordNet. Then, using the clusters of that model, we built up a hierarchical cluster for each word, where the root word is connected to its ten most similar words in the first layer, and in the next layer each of those ten words is connected to its five most similar words. Following this process the network is built up. The details and outcomes of the process are shown in the result analysis chapter.

3.3.7 Adding Details to Hierarchy Structure

In the previous section we discussed building the backbone of the WordNet structure by connecting the words. As the next step, we need to add the meaning, definition and related grammatical information of each word to the hierarchical WordNet structure to complete it. As discussed in the dictionary parsing section, this detailed information is formatted as key-value pairs for processing. We now use these key-value pairs to attach the details of each word to the WordNet structure.

Figure 3.4: Mapping with dictionary

To do this, we use the root word (target word) as the key in the dictionary and get the value as output. That value holds the details of the word stored in the dictionary, and that information is then mapped to the target word. Repeating this process for all the words in the hierarchy structure, we get the complete WordNet structure. A graphical representation of the process is shown in Figure 3.4, where the details of the root word are mapped from the dictionary onto the previously shown WordNet backbone structure. This process has to be repeated for all the words present in the corpus to complete the WordNet. The outcome of this step is shown in the result analysis chapter, and a small sketch combining the last two steps is given below.
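Combining the hierarchy building and detail mapping steps, a minimal sketch could look like the following. The trained embedding model and the parsed dictionary from the earlier sketches are assumed, and the nested-dictionary representation is only an illustration, not a format prescribed by the thesis.

```python
def build_hierarchy(model, root, first=10, second=5):
    """Two-level WordNet backbone: the root connects to its `first` most similar
    words, and each of those connects to its `second` most similar words.
    The cosine similarities returned by most_similar are kept as edge weights."""
    tree = {"word": root, "children": []}
    for word1, sim1 in model.wv.most_similar(root, topn=first):
        node = {"word": word1, "similarity": sim1, "children": []}
        for word2, sim2 in model.wv.most_similar(word1, topn=second):
            node["children"].append({"word": word2, "similarity": sim2})
        tree["children"].append(node)
    return tree

def attach_details(tree, dictionary):
    """Map the parsed dictionary entry (meaning, part of speech, usage) onto
    every node of the hierarchy for which an entry exists."""
    tree["details"] = dictionary.get(tree["word"])
    for child in tree.get("children", []):
        attach_details(child, dictionary)
    return tree

# Usage, with `model` and `dictionary` from the earlier sketches:
# entry = attach_details(build_hierarchy(model, "ভাল"), dictionary)
```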
Chapter 4
Result Analysis

As discussed in the previous chapter, we implemented five different variations of dynamic word embedding models and compared the constructed word embeddings. From this comparison we attempt to find a suitable dynamic word embedding model for constructing the Bangla WordNet. In this chapter we discuss in detail the results obtained from our implementation.

First of all, parameter tuning was needed for all the word embedding models to get the optimal result from each of them. We tried to get the most satisfactory results from each approach, applying various combinations of window and vector sizes and checking the results in order to tune the parameters. Table 4.1 shows the optimal parameter settings for each approach.

Table 4.1: Parameter Tuning for Optimum Results
Model | Window size | Vector size | Iterations
Exp I: Word2vec in Tensorflow | 4 | 1000 | 10
Exp II: Word2vec from Gensim package (Skip-gram model) | 5 | 400 | 5
Exp II: Word2vec from Gensim package (CBOW model) | 5 | 400 | 5
Exp III: FastText Skip-gram model | 5 | 100 | 5
Exp III: FastText CBOW model | 5 | 100 | 5

Coming to the constructed word embeddings, it can be seen from the results that there are similarities among the clusters from the five different approaches as well as some significant differences. For some words we got a similar set of words from several of the models, but the variation was also notable. The next sections show the word embeddings constructed by these five models.

4.1 Experiment I: Word2vec in Tensorflow

This section contains the word embeddings constructed by the Word2vec in Tensorflow model. These are the optimal results of this model, acquired by keeping the vector size at 1000 with a window size of 4, constructed after 10 iterations. As can be seen from the examples below, a cluster contains inflections or other forms of the pivot word as well as context words that tend to occur with the pivot word. The ten words most similar to each pivot word are shown.

Table 4.2: Results from Word2vec in Tensorflow
Random word | Words in cluster
আমরা | আমােদর, আিম, চাই, যখন, তাই, তারা, িক, েসই, িকন্তু, সবাই
তাঁর | তার, েসই, সােথ, তাঁেদর, একজন, একই, ওই, িতিন, পের, বেল
জন | পৰ্েয়াজন, সুেযাগ, জেন , পাশাপািশ, তাই, দরকার, কারণ, িকছু, িকন্তু, তােদর
েকান | েকােনা, এমন, অন , কারণ, তেব, েসটা, বা, েনই, তাই, এখেনা
পাের | হেব, পারেব, পাির, হেল, পােরন, হেতা, চাই, চায়, পােরিন, থােক
হেত | েযেত, থাকেত, করেত, হেল, না, রাখেত, তাই, তাহেল, তেব, িদেত
বড় | সবেচেয়, অবস্থা, খুবই, খুব, আমােদর, অেনক, িকছু, মেতা, মানুেষর, আেছ
টাকা | হাজার, লাখ, েকািট, টাকার, খরচ, পাঁচ, মাতৰ্, িবিকৰ্, িতন, পৰ্ায়
নতুন | মাধ েম, ৈতির, কাজ, জন , িবিভন্ন, নানা, একিট, এই, সব, একই
আমােদর | তাই, িকন্তু, েয, েসই, শুধু, এই, অেনক, সব, এখন, আমরা
একিট | একিট, একিটদুিট, েযএকিট, একট, একিটসহ, একএকিট, একিটই, একিটভ, েনইএকিট, একিটও
আিম | আিমই, আিমও, আমও, আিমআিম, আিম, আআিম, আিমষ, কীআিম, আিমেতা, ওেকআিম
যায় | যায়, যায়আেস, যায়েস, যায়ই, যাঃ, যােব, যােবএই, যািচছ, য়যায়, যাব
কেরন | কেরনিন, কেরনঃ, কেরন, কেরিন৷, কেরনও, কেরেতন', কেরেছন, কেরণ, কেরিছেলন, কেরিনবরং
বছর | বছ, চবছর, বছর৷, নছর, ছরছর, দশবছর, বছরকেয়ক, গতবছর
এখন | এখেনৗ, এখেনা, এখনকী, এখনেতা, এখনই৷, কীএখন, এখনএই, এখনই, এখেনা, এখিন
আবার | নাআবার, তারপরআবার, েতর্আবার, আববার, আেরকবার, তারআবার, ডাকবার, েবরুবার, চকবার, েফরবার
মেতা | মেতাও, মেতা৷, রমেতা, মেতাই৷, জমেতা, মেতানই, মেতাই, এইমেতা, মাপমেতা, মেতান
কাজ | কাজ৷, কাজও, কােজা, কাজই, কাজটাজ, ওকাজ, কাজিটই, কাজাখ, সৎকাজ, কাজিট
েদখা | অেদখা, েদখাইত, েদখাক, েদখায়ই, েদখাসহ, েদখায়া, েদখাএইসবই, েদখাত, েদখােবা, েদখােবা

4.2 Experiment II: Word2vec from Gensim package (Skip-gram model)
In this section we show the word embeddings constructed by the Word2vec model from the Gensim package (skip-gram architecture). These are the optimal results of this model, acquired by keeping the vector size at 400 with a window size of 5, constructed after 5 iterations. As can be seen from the examples below, a cluster contains words of a similar type to the pivot word and context words occurring with the pivot word. The ten words most similar to each pivot word are shown.

Table 4.3: Results from Word2vec from Gensim package (Skip-gram model)
Random word | Words in cluster
আমরা | আিম, েতামরা, েসটা, তাহেল, পাির, এখেনা, হয়েতা, এখােন, কেরিছ, েতামােক
তাঁর | তাঁেদর, সব্াধীনতার, িযিন, েনন, িনজ, েমিডেকল, েদন, করিছেলন, পিরবােরর, যুদ্ধ
জন | জেন , সুেযাগ, েচষ্টা, কােজ, উেদ্দেশ , মাধ েম, ব াপাের, ব বস্থা, করেল, পযর্ােয়
েকান | েকােনা, থাকার, ছাড়া, তােত, বসােনার, আপিত্ত, পৰ্েয়াজন, েতমন, উপায়, এমন
পাের | পারেব, চায়, পােরিন, পােরন, পারত, হেতা, পারব, বাধ , চাই, পারেবন
হেত | েপেত, েযেত, থাকেত, রাখেত, আনেত, েদাকােনও, িনেত, ঘটেত, েবিশও, লাগেত
বড় | েছাট, সবেচেয়, িশেরানামায়, েমেয়, গায়েকরা, সুন্দর, েজারটা, িজিনস, মধ িবত্তেদর, েছেল
টাকা | েকািট, হাজার, পাঁচ, টাকার, লক্ষ, বরাদ্দ, লাখ, বছর, পৰ্ায়, গত
নতুন | িনযর্াতন, পৰ্িকৰ্য়া, জাতীয়, পূেবর্, মামলা, এলাকায়, েসনাবািহনী, অথর্ৈনিতক, িবচােরর, েজাট
আমােদর | সবার, এখন, েতামােদর, সবাইেক, েতামার, ইয়াকর্েদর, মানুষেক, এটাই, সব্াভািবক, ওই
একিট | এিট, দুিট, অনুযায়ী, বতর্মান, সকল, আকাের, িবিভন্ন, ইিতহােস, নতুন, িনজসব্
আিম | বললাম, েকেন, হ াঁ, েতামােক, তুিিম, আিমও, েতা, েতামােদর, রািজ, েদব
যায় | যােব, যােচ্ছ, েগেছ, যাক, যায়, েগেল, আেস, রূপ, ওেঠ, েযত
কেরন | কেরেছন, কেরিছেলন, েদন, কেরনিন, কেরিছল, করেবন, িহেসেেব, েনন, িবরুেদ্ধ, অসহেযােগ
বছর | গত, পাঁচ, চার, হাজার, মাস, িবশ, দশ, শত, সপ্তাহ, বছেরর
এখন | নািক, এখেনা, িনশ্চয়ই, েপেয়িছ, কথাই, এটাই, যখনই, েকানটা, সব্াভািবক, েসটাই
আবার | পড়েত, এেকবাের, সবাই, অিফেস, সবিকছু, বািড়েত, পিরষ্কার, গুিল, কেরই, মানুষিেট
মেতা | েবশ, েলেন্স, সুন্দর, অেনকটা, ভিতর্, পুেরা, সব্চ্ছ, েফাকাস, েলেন্সর, আেলােত
কাজ | কােজ, েশেষ, সহায়তা, ভয়াবহ, সংগৰ্হ, িদেয়ই, পৰ্ভাব, ইচ্ছা, করেতা, অসাধারণ
েদখা | কেম, রেয়, ফুসফুেস, গেবষণায়, জানা, েবেড়, পাওয়া, েরাগী, রূপ, জীবদ্দশা

4.3 Experiment II: Word2vec from Gensim package (CBOW model)

In this section we show the word embeddings constructed by the Word2vec model from the Gensim package (CBOW architecture). These are the optimal results of this model, acquired by keeping the vector size at 400 with a window size of 5, constructed after 5 iterations. As we can see from the examples below, the clusters do not contain words similar to the pivot word or its context words; rather, the model has given noisy output.
Here most similar 10 words to the pivot word is shown.Table 4.4: Results from Word2vec from Gensim package (CBOW model)RandomWord Words on clusterআমরা ইয়াকর্েদর, যতই, েপঁৗছােত, েহাক, িলডাররা, েশয়ািরেঙর, অবশ ই, কথাও, িচন্তা, দুবারইতাঁর তাঁরা, ৈতিরেত, েকা, সারা, বজর্েনর, সৃিষ্টেত, ভাইরাস, অপিরেশািধত, কলফিন, িলিপরজন যুিক্ত, িচিহ্নত, েজারদার, পিরেবশ, আশব্াস, পৰ্কাশ, সমােলাচনা, অন্তভূর্ক্ত, পৰ্ত াখ ান, পৰ্িতেরাধেকান েকােনা, পৰ্ত য়, িকছুরই, কারণ, বইেত, েতমনভােব, আধ ািত্মকতা, সাবেজেক্ট, ঘেটিন, ইন্ডািস্টৰ্রপাের ঈষর্ায়, দগ্ধ, অিভবাসী, পাঠ বই, পারেবন, পারত, পারেব, েপেরিছল, অসব্ীকৃিত, নািমজউিদ্দনহেত হেতই, থাকেত, িদেত, যাইেত, িফল্মগুেলা, েপেত, িনেত, সািজয়া, ঘটােত, জানেতবড় ফুেটা, েশ্লষ্মা, দরজাটা, আেলা, পাথেরর, েলেজর, চওড়া, রাস্তা, অন্ধকার, খােটাটাকা বছেরর, 'ছয়, চলাচেলর, দশ, সপ্তােহর, পূবর্বেঙ্গ, লাখ, েজলার, উপলেক্ষ, গান্ধীরনতুন গান্ধীেক,পািকস্তান, বণ্টন, কৰ্য়, মন্তব , িশক্ষা, পিরকল্পনার, অিধগৰ্হণ, মাউন্টব ােটন, পৰ্স্তািবতআমােদর েবাঝার, সিত , রািখ, েডথিসিটেত, তরুণী, ভাবনা, ঢুকেত, েকানটা, শুধু, কখনএকিট বন্দেরর, শাসন, আইন, িহেসেব, সব্াধীনতা, ভারেতর, সংখ াগিরষ্ঠ, েরাধ, সরকার, সব্রাজআিম েতামােদর, জখম, আিমও,</s>
<s>েমেরছ, রাগ, েববী, ব াটা, থািক, নােচর, আেছইযায় েযত, যােব, িদত, ক ান্টনেমন্ট, উপহারগুেলা, যারাই, আধাজন্তু, জন্মােব, থােক, িকৰ্য়াশীলকেরন হওয়ার, বাস্তবায়ন, পদেক্ষপ, বাঘা, মেনাভােবর, গভনর্র, ব বস্থার, িবিনেয়ােগ, পািটর্শন, পৰ্বতর্েনরবছর জেন্মর, আড়াই, পাঁচ, সাত, টাকা, ৈদিনক, পৰ্ায়, িমিলয়ন, বছের, েগােয়ন্দােকএখন সিত , বুঝেতই, মজা, ভােলা, শুধু, তারেচেয়, শুনেত, কথাটা, বল, হয়েতাআবার িনেয়, আসেত, ফুল, েযন, গািড়টা, িচিন্তত, নামেত, খািনকটা, দুবর্ল, সিবতামেতা িদেল, পুেরা, িফিলপস, পৰ্চন্ড, েদিখেয়, জেল, েখলা, পৰ্থমবার, সামান , েচহারারকাজ ব বহার, আেলাচনা, রক্ষা, েসসব, সহজ, পিরহার, িনিশ্চত, পৰ্েয়াগ, বােজট, ব াখ ােদখা পাওয়া, জানা, পাহারায়, েবঁেক, কাঁচুিল, েব্লজােরই, উদুর্েক, শহরগুেলার, ঝেরতুিম, েগেয়-31-4.4 Experiment III: FastText Skip-gram modelIn this section we show the word embedding constructed by implementing FastText Skip-grammodel. The table below shows the optimal results of this model acquired by keeping the vectorsize at 100 while the window size was 5 and constructed after 5 iterations. Here optimal result wasfound with smaller vector size then other models. As can be seen from the examples below, thecluster contains mostly inflections or different forms of pivot word. Here most similar 10 wordsto the pivot word is shown.Table 4.5: Results from FastText Skip-gram modelRandomWord Words on clusterআমরা আমরা৷, কীআমরা, নয়আমরা, আমরাই, আমরােতা, হয়আমরা, িকআমরা, েহাকআমরা, আমর, েলআমরাতাঁর তাঁর৷, তাঁরই, তাঁরও, তাঁহা, তাঁরা, তাঁতীও, তাঁরাই, তাঁ, তাঁেকসহ, ওঁরজন জন ৷, জেন ঃ, জন ও, জেন ৷, েসৗজন , জেন , এজন , জন ই, জিন , এরজনেকান েকানস, েকা , েকােনাা, েকােনাও, েকােনা, েকানডা, েকা , েকােনাই, েকানও, েকানইপাের পােরা, পাের৷, পােরএ, পােরতখন, পােরঃ, পােরআর, পােরএমন, পােরনই, পােরন৷, ঐপােরহেত ঝেত, লখনউেত, ধেত, ইইউেত, নড়েত, নেত, েপেত, চড়েত, অইেত, ওেতবড় বড়বড়, বড়র, হড়বড়, বড়ও, বড়ইর, বড়েছাট, েছাটবড়, েছাটেছাট, েছাট, নড়বড়টাকা টাকা৷, দশটাকা, টাকায়ও, টাকাসহ, হাজারগুণ, দুইটাকা, হাজারও, দুটাকা, হাজারিদঘী, টাকাকীনতুন নতুননতুন, নতুনতর, নতুন৷, নতুনই, নত, নতূন, ৈজতুন, িনত নতুন, নতুনরা, চালুআমােদর েযআমােদর, নাআমােদর, হেবআমােদর, দাদাআমােদর, িছলআমােদর, সৎমােদর, তমােদর, মােদর, েরামােদর, অমােদরএকিট একিট, একিটদুিট, েযএকিট, একট, একিটসহ, একএকিট, একিটই, একিটভ, েনইএকিট, একিটওআিম আিমই, আিমও, আমও, আিমআিম,আিম, আআিম, আিমষ, কীআিম, আিমেতা, ওেকআিমযায় যায়, যায়আেস, যায়েস, যায়ই, যাঃ, যােব, যােবএই, যািচছ, য়যায়, যাবকেরন কেরনিন, কেরনঃ, কেরন, কেরিন৷, কেরনও, কেরেতন', কেরেছন, কেরণ, কেরিছেলন, কেরিনবরংবছর বছ, চবছর, বছর৷, নছর, ছরছর, দশবছর, বছরকেয়ক, গতবছর,এখন এখেনৗ, এখেনা, এখনকী, এখনেতা, এখনই৷, কীএখন, এখনএই, এখনই, এখেনা, এখিনআবার নাআবার, তারপরআবার, েতর্আবার, আববার, আেরকবার, তারআবার, ডাকবার, েবরুবার, চকবার, েফরবারমেতা মেতাও, মেতা৷, রমেতা, মেতাই৷, জমেতা, মেতানই, মেতাই, এইমেতা, মাপমেতা, মেতানকাজ কাজ৷, কাজও, কােজা, কাজই, কাজটাজ, ওকাজ, কাজিটই, কাজাখ, সৎকাজ, কাজিটেদখা অেদখা, েদখাইত, েদখাক, েদখায়ই, েদখাসহ, েদখায়া, েদখাএইসবই, েদখাত, েদখােবা, েদখােবা-32-4.5 Experiment III: FastText CBOWmodelIn this section we show the word embedding constructed by implementing FastText CBOWmodel. The table below shows the optimal results of this model acquired by keeping the vectorsize at 100 while the window size was 5 and constructed after 5 iterations. Here, like the fasttextskip-gram model, optimal result was found with smaller vector size then other models. As can beseen from the examples below, although the cluster contains mostly inflections or different formsof pivot word it also contains some noise. Here most similar 10 words to the pivot word is shown.Table 4.6: Results from FastText CBOW modelRandomWord Words on clusterআমরা</s>
<s>আমরাযিদও, আমরাই, আপনােকও, আমরাও, বেলিছআমরা, আপনা, কীআমরা, কীটও, আপনােকই, কীটসতাঁর পুনঃআেলাচনার, সুেলাচনার, আলাপআেলাচনার, েকস্তার, তাঁরই, কাইয়ুমআেলাচনার, সৃিষ্টশীলতার, িধক্কার, মিনকার, িবঘারজন জন ও, জন ৷, জন্স, েসৗজন , এজন , জেন ঃ, েসৗজন ঃ, জন ই, জেন ৷, তজ্জনেকান েকােনাও, েকােনাা, েকােনাই, েকােনা, লুেকােনা, েথান, েকানস, েকা , েকা , েকােনািটইপাের পােরআর, পােরতখন, পাের৷, পােরএ, পােরা, পারডন, পােরঃ, পােরও, পারৱূর, পারদহেত কইরেত, ঝরেত, ভরেত, কসরেত, েখারেত, শরেত, মরেত, ধরেত, ঠকেত, কুদরেতবড় বড়র, বড়ও, হড়বড়, বড়ইর, বড়ই, বড়সড়, গড়বড়, বড়সেড়া, নড়বড়, বড়বাড়ীটাকা দশটাকা, ওসাকা, শলাকা, েপাঁদপাকা, গাঢাকা, টাকা৷, জলধাকা, ইয়াকা, হাজারী, এলাকানতুন নতুননতুন, নতুন৷, 'নতুনই, েফারামিট, অনুিষ্ঠত, পুনগর্িঠত, িতনচারিট, উৎকিন্ঠত, পিরচালকমণ্ডলীর, ভূখণ্ডিটআমােদর েতামােদর, নাআমােদর, েযআমােদর, এেদরও, তাঁেদরও, েখেদরও, ইহাঁেদরও, এঁেদরই, আপনােদর, তাঁেদরইএকিট একএকিট, ইবুকিট, ছকিট, ৈবঠকিট, েপাষাকিট, যুগিট, ফলকিট, সড়কিট, সূচকিট, িলংকিটআিম আিমআিম, আিম, আিমষ, আিমই, ওেকআিম, আিমরউল, আিমও, কীআিম, আআিম, েতামােকআিমযায় যায়যায়, যায়িন, যায়েমজর, যায়এক, যায়ও, যায়, যায়গা, যায়েস, যায়এই, যায়এসবকেরন কেরিন, কেরিনবরং, কেরনিন, কেরনও, কেরনঃ, কেরন, কেরেছন, কেরনিন, কেরিন, কেরেতনবছর চবছর, বছ, বছর, বছরর, বছর, নছর, ছরছর, দশবছর, বছেররই, বছরইএখন এখেনা, এখনই, এখেনৗ, এখনকী, কীএখন, এখনএই, এখনই, িকেতা, িকটস, িকষাণআবার তারপরআবার, েদখবার, েদিখবার, নাআবার, তারপরসব, েধাবারেজাড়, দশবােরা, খুঁজবার, েদখাবার, তারপেরামেতা জুতমেতা, খুদাই, আশাও, েহােতা, রুবাই, আভাই, দুকথা, গচ্ছািম, গুঁেতা, খুিশমেতাকাজ ব বহারটা, অকাজ, েসেচষ্টা, ব বহারেক, ব বসােক, ধষর্ণেচষ্টা, েচষ্টাটুকু, সহজই, কষ্টও, িনেশ্চষ্টেদখা অেদখা, েদখায়ই, েদখাসহ, েদখাইত, েদখাএইসবই, েদখাত, েদখাক, েদখােবা, েদখায়া, েদখাও-33-4.6 Training TimeAn important aspect of comparing these models are the measure of training time needed foreach model. In this section we show the time that was required to train each model on our dataset.The size of our corpus is 59856174 bytes. We used a computer with 4GBRAM, core i3-3110MCPU. Training Time for each experiment is shown in table 4.7. The graphical representation is alsoshown in figure 4.1.As can be seen from the table the training time varied from model to model while almost sametime was needed for building the both variations of the same model.Table 4.7: Training Time of the ExperimentsExperiment Training TimeExp I: Word2Vec in Tensorflow 18 minutesExp II: Gensim Word2Vec- Skip gram Model 30 minutesExp II: Gensim Word2Vec- CBOWModel 32 minutesExp III: FastText- Skip gram Model 23 minutesExp III: FastText- CBOWModel 24 minutesFigure 4.1: Training Time-34-4.7 Comparing The Word Embedding ModelsFrom the given examples of the clusters produced by the three different models we can cometo some decisions. If we consider clusters containing similar and synonymous words Word2Vecimplementation of Tensorflow gives good results. Comparing the two variations of FastTextmodel,the FastText-Skip Gram model is the best because it gives all the inflections of a specific word.The FastText-CBOW model do not produce such accurate results, rather it contains more noisyoutput than FastText-Skip Gram model’s output. Gensim library based skip-gram model givescontextually similar words but fails to give inflection of words. The CBOW architecture of thismodel does not produce good clusters rather it gives noisy output. 
So, from evaluating the results of these models, we come to the conclusion that the FastText skip-gram model is the most accurate and efficient model for building Bangla word clusters and consequently the most appropriate model for building the Bangla WordNet.

FastText uses the character n-grams of a word and creates the word's vector as the sum of the vectors of all its n-grams. As a result it can produce output even for a word that is not in the corpus, as illustrated by the sketch below.
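As an illustration (using the Gensim FastText model from the earlier sketch; the query word is a made-up inflected form assumed to be absent from the corpus), an out-of-vocabulary word can still be queried because its vector is composed from subword n-grams.

```python
# `model` is the trained Gensim FastText model from the earlier sketch.
oov_word = "গাছগুেলােতও"   # assumed not to occur anywhere in the training corpus
print(oov_word in model.wv.key_to_index)          # False: not in the vocabulary
print(model.wv.most_similar(oov_word, topn=10))   # still answered, via its character n-grams
```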
The other approaches, by contrast, cannot generate results for an unknown word. When we asked the FastText model for the cluster of an unknown word the results were not fully satisfactory, but it did return something. If the dataset can be prepared properly for the FastText skip-gram model, we think it will produce much more accurate word clusters. For our work, we proceed to build the WordNet structure with the clusters constructed by the FastText skip-gram model.

4.8 Hierarchy Building

The constructed word embeddings give us an idea about how the words are connected and how to establish relationships among them. In the previous sections we saw that five different methods were applied in order to find a suitable method for building the hierarchical structure of the WordNet. The cluster results and the discussion above show that the FastText skip-gram model gives the best results when it comes to including the inflections or different forms of the target word. As we are building the WordNet structure from scratch, the words first need to be connected to their inflections, and the FastText skip-gram model is the better suited model for that purpose. So we chose the embeddings produced by the FastText skip-gram model to build the hierarchical structure of the WordNet. The structure is shown below.

Figure 4.2: Hierarchy of Word relations along with cosine similarity

As we can see from the given structure, starting with the target word 'ভাল', its ten most similar words are connected to it, making up the first layer of the WordNet network. The five most similar words of each of those ten words then connect to their respective target words, making up the second layer of the network. Similarity is measured here by the cosine similarity between the word vectors; it was established by the dynamic word embedding models and is used as the measure of relatedness between words. For ease of representation we show the structure for only one word here. Repeating this process for all the words in the corpus, the complete WordNet structure builds up.
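For completeness, the edge weights in Figure 4.2 are the usual cosine similarity between two word vectors, which is also the value Gensim's most_similar returns; a minimal restatement is sketched below, with the example word pair chosen arbitrarily.

```python
import numpy as np

def cosine_similarity(u, v):
    """cos(u, v) = (u . v) / (|u| |v|); the edge weight shown in the hierarchy figures."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Usage with the trained model from the earlier sketches:
# cosine_similarity(model.wv["ভাল"], model.wv["কথা"])
```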
4.9 Adding Details to Hierarchy Structure

As discussed before, a complete WordNet contains the detailed information of the words as well as the connections among them. The structure shown in the previous section is only the backbone of the WordNet: it connects the words and constructs the network. To make it a WordNet, we need to add detailed information such as the meaning, definition, uses, parts of speech and other grammatical information of all the words to this backbone structure. The process followed for doing this was discussed in the methodology chapter. In this section we shed light on the outcome of that process, with a graphical representation of the output shown in the following figure.

Figure 4.3: Hierarchy of Word relations along with cosine similarity

As can be seen by comparing this figure with the previous one, the detailed information of the target word 'ভাল' obtained via dictionary parsing is now added to the hierarchy structure. The structure now contains the details of the word as well as the connected network; that is the WordNet structure. For ease of representation only the details of the root (target) word are shown here, but this process is carried out for all the words present in the network. Repeating this step for all the words in the corpus, we get the complete WordNet structure.

Chapter 5
Discussion

5.1 Discussion

The complete construction of a WordNet from scratch for any language is a huge amount of work. Our target was to construct a Bangla WordNet in which words are related to their synonymous words and their inflections. For this we tried different dynamic word embedding models with various combinations of vector size and window size and compared the constructed embeddings. As this is the basis of building the WordNet structure, finding an appropriate method for it was an important step of our implementation.

From observing the results for the constructed embeddings we can see that, although they are good, they can be improved by working with a bigger corpus covering a wider range of topics. A standard Bangla dataset is still scarce; if this process can be repeated with a standardized dataset, better results can be obtained. Another step was getting data from the Bangla dictionary; if the work is done with a better and more thoroughly processed dictionary, the process will improve further.

Previously, WordNets were usually built by machine translation of the Princeton WordNet. Our work shows that building a WordNet through a word clustering process can be a promising method of WordNet construction for any language.

Chapter 6
Conclusion

A Bangla WordNet will be a big contribution to the field of Bangla natural language processing and will open the door to promising contributions in many of its sectors.

We started our work with the goal of improving on the efficiency of previous works and of using deep learning methods to achieve better word embedding performance for constructing the Bangla WordNet. In order to do so, we compared different dynamic word embedding models for Bangla and obtained satisfactory results. We have also shown a comparison between these models which provides important insights into Bangla word embedding. This can help further research concerning the Bangla word embedding process and the development of dynamic word embedding models for the Bangla language.

The process shown in our implementation can be a promising method of WordNet construction for any language. Most WordNet construction efforts follow the Princeton WordNet construction process, but this method proposes a new approach to WordNet construction, and its outcome is quite satisfactory. Further work on this process will give new insights and can be a big step towards the digitalization of the Bangla language.

References

[1] T. T. Urmi, J. J. Jammy, and S. Ismail, "A corpus based unsupervised Bangla word stemming using n-gram language model," in Informatics, Electronics and Vision (ICIEV), 2016 5th International Conference on. IEEE, 2016, pp. 824–828.

[2] F. Faruqe and M. Khan, "BWN - a software platform for developing Bengali WordNet," in Innovations and Advances in Computer Sciences and Engineering. Springer, 2010, pp. 337–342.
[3] K. T. H. Rahit, M. Al-Amin, K. T. Hasan, and Z. Ahmed, "BanglaNet: Towards a WordNet for Bengali language."

[4] P. Bhattacharyya, "IndoWordNet," in The WordNet in Indian Languages. Springer, 2017, pp. 1–18.

[5] "The 10 Most Spoken Languages In The World," https://www.babbel.com/en/magazine/the-10-most-spoken-languages-in-the-world/, accessed: 2018-07-01.

[6] "Princeton WordNet," https://wordnet.princeton.edu/, accessed: 2018-07-01.

[7] S. K. N. Alok Ranjan Pal, Diganta Saha, "Word sense disambiguation in Bengali: A knowledge based approach using Bengali WordNet."

[8] S. Ismail and M. S. Rahman, "Bangla word clustering based on n-gram language model," in Electrical Engineering and Information & Communication Technology (ICEEICT), 2014 International Conference on. IEEE, 2014, pp. 1–5.

[9] M. Sinha, T. Dasgupta, A. Jana, and A. Basu, "Design and development of a Bangla semantic lexicon and semantic similarity measure," International Journal of Computer Applications, vol. 95, no. 5, 2014.

[10] L. Yuan, "Word clustering algorithms based on word similarity," in Intelligent Human-Machine Systems and Cybernetics (IHMSC), 2015 7th International Conference on, vol. 1. IEEE, 2015, pp. 21–24.

[11] A. Al Hadi, M. Y. A. Khan, and M. A. Sayed, "Extracting semantic relatedness for Bangla words," in Informatics, Electronics and Vision (ICIEV), 2016 5th International Conference on. IEEE, 2016, pp. 10–14.

[12] A. Ahmad and M. R. Amin, "Bengali word embeddings and it's application in solving document classification problem," in Computer and Information Technology (ICCIT), 2016 19th International Conference on. IEEE, 2016, pp. 425–430.

[13] V. Rengasamy, T.-Y. Fu, W.-C. Lee, and K. Madduri, "Optimizing word2vec performance on multicore systems," in Proceedings of the Seventh Workshop on Irregular Applications: Architectures and Algorithms. ACM, 2017, p. 3.

[14] L. Ma and Y. Zhang, "Using word2vec to process big text data," in Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015, pp. 2895–2897.

[15] R. Bamler and S. Mandt, "Dynamic word embeddings," arXiv preprint arXiv:1702.08359, 2017.

[16] P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov, "Enriching word vectors with subword information," Transactions of the Association for Computational Linguistics, vol. 5, pp. 135–146, 2017.

[17] W. Black, S. Elkateb, H. Rodriguez, M. Alkhalifa, P. Vossen, A. Pease, and C. Fellbaum, "Introducing the Arabic WordNet project," in Proceedings of the Third International WordNet Conference. Citeseer, 2006, pp. 295–300.

[18] Z. Vetulani and B. Kochanowski, ""PolNet - Polish WordNet" project: PolNet 2.0 - a short description of the release," in Proceedings of the Seventh Global WordNet Conference, 2014, pp. 400–404.

[19] L. Kashyap, S. R. Joshi, and P. Bhattacharyya, "Insights on Hindi WordNet coming from the IndoWordNet," in The WordNet in Indian Languages. Springer, 2017, pp. 19–44.

[20] N. R. Ram and C. N. Mahender, "Marathi WordNet development," International Journal of Engineering and Computer Science, vol. 3, no. 08, 2014.

[21] I. K. A. N. Malhar Kulkarni, Chaitali Dangarikar and P. Bhattacharyya, "Introducing Sanskrit WordNet," 2010.

[22] T. Pedersen, S. Patwardhan, and J. Michelizzi, "WordNet::Similarity: measuring the relatedness of concepts," in Demonstration Papers at HLT-NAACL 2004. Association for Computational Linguistics, 2004, pp. 38–41.

[23] Z. Elberrichi, A. Rahmoun, and M. A. Bentaalah, "Using WordNet for text categorization," International Arab Journal of Information Technology (IAJIT), vol. 5, no. 1, 2008.
Takenobu, and T. Hozumi, “The use of wordnet in information retrieval,”Usage of WordNet in Natural Language Processing Systems, 1998.[25] J. Gharat and J. Gadge, “web information retrieval using wordnet,” International Journal ofComputer</s>
|
[26] S. Kolte and S. Bhirud, "Wordnet: a knowledge source for word sense disambiguation," International Journal of Recent Trends in Engineering, vol. 2, no. 4, 2009.
[27] M. A. Al Mumin, A. A. M. Shoeb, M. R. Selim, and M. Z. Iqbal, "Sumono: A representative modern Bengali corpus."
[28] E. Grave, P. Bojanowski, P. Gupta, A. Joulin, and T. Mikolov, "Learning word vectors for 157 languages," in Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018), 2018.

Appendix A
Paper Published on Previous Work

Performance Analysis of Different Word Embedding Models on Bangla Language

Zakia Sultana Ritu, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh (zakiaritu.cse@gmail.com)
Nafisa Nowshin, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh (nafisanowshin107@gmail.com)
Md Mahadi Hasan Nahid, Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh (nahid-cse@sust.edu)
Sabir Ismail, Computer Science and Engineering, Stony Brook University, New York, United States (sabir.ismail@stonybrook.edu)

Abstract—In this paper we discuss the performance of three word embedding methods on a Bangla corpus. Word embedding is a big part of natural language processing related research works. Many research works have focused on finding appropriate methods for the word clustering process. Previously N-gram models were used for this purpose, but now, with the improvement of deep learning methods, dynamic word clustering models are preferred because they reduce processing time and improve memory efficiency. In this paper we discuss the performance of three word embedding models, namely word2vec in Tensorflow, word2vec from the Gensim package and the FastText model. We use the same dataset for all the models and analyze the outcomes. These three models are applied on a Bangla dataset containing 5,21,391 unique words to produce the clusters, and we evaluate their performance in terms of accuracy and efficiency.

Keywords—Natural Language Processing (NLP), machine learning, deep learning, word cluster, word embedding, Bangla word clustering, word2vec, fasttext, skip-gram, CBOW, GloVe.

I. INTRODUCTION

Bangla is a major world language, and its importance will increase further in the coming years with the digitization of the Bangla language. As the importance of this language grows, so does the research work concerning the Bangla language. One of the major sectors of Bangla natural language processing involves establishing a method for producing fast and accurate word clusters. Much work is being done in this sector, and continuous effort is being given to finding an appropriate method for producing word clusterings. In this paper we attempt to apply three dynamic word clustering models to compare their performance in producing Bangla word clusterings.

Word clustering can be referred to as a technique for partitioning sets of words into subsets of semantically similar words. This has a far-reaching effect on many NLP related works. It is increasingly becoming a major technique used in a number of NLP tasks ranging from word sense or structural disambiguation to information retrieval and filtering. To reach any decision about the performance of a model in producing word clusterings, we need to evaluate its performance by applying it to a large dataset and comparing the results. But working with a large dataset means more run time and consequently less efficiency. On the other hand, if it is done with a smaller dataset, the clusters won't be accurate. So, in choosing a word clustering method, our main goal is to reduce run time and increase efficiency in producing accurate word clusters.
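The dynamic models compared in this paper represent each word as a dense vector and treat words whose vectors lie close together as members of the same cluster. The toy sketch below illustrates that membership test with cosine similarity; the example words, the 4-dimensional vectors and the threshold are illustrative assumptions, not values from the reported experiments.

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two word vectors:
    # values near 1.0 mean the words are used in similar contexts.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-dimensional embeddings for three placeholder words
# (the real models discussed here use hundreds of dimensions).
vectors = {
    "school":  np.array([0.9, 0.1, 0.3, 0.0]),
    "college": np.array([0.8, 0.2, 0.4, 0.1]),
    "banana":  np.array([0.0, 0.9, 0.1, 0.8]),
}

def same_cluster(w1, w2, threshold=0.8):
    # A toy membership test: two words fall in the same cluster
    # when their embeddings are sufficiently similar.
    return cosine_similarity(vectors[w1], vectors[w2]) >= threshold

print(same_cluster("school", "college"))  # True  - close vectors
print(same_cluster("school", "banana"))   # False - distant vectors
```

Word clusters include semantically similar words, meaning it will group those words that are similar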
<s>in meaning andtend to occur in similar contexts in natural language. Thereare many approaches to compute semantic similarity betweenwords based on their distribution in a corpus. Much researchwork has been done to find an efficient and accurate modelfor building word clusters. Although at first N-gram modelswere used to construct word clusters, in recent times withthe improvement of deep learning methods dynamic modelshave become more popular in building word clusters. Dynamicmodels construct the vector representation of words and buildclusters from them. In this paper we discuss the performanceof three variations of dynamic word clustering models, whichare, word2vec in Tensorflow, word2vec from Gensim packageand FastText model in case of constructing Bangla wordclusters.This paper is arranged as follows, in section 2, we haveshed light on some of the previous approaches to constructword clustering in Bangla and other languages and theirperformance. In section 3 we have discussed about the datasetthat we have used in evaluating the performance of the models.In section 4 we discuss in brief the full methodology appliedin our work. Then in section 5 we present an analysis ofthe results we got from applying these three models andevaluate their performance in producing accurate Bangla wordclusterings. We conclude in section 6.II. BACKGROUND STUDYWord clustering is an important aspect of dealing withlarge datasets in research work. So, much research work hasbeen done with a view to finding appropriate methods forconstructing word clustering in Bangla and in various otherlanguages. We will discuss some of these works below.Previous word clustering techniques mostly involved usingN-gram model to construct the clusters. This can be observedin the works of Ismail and Rahman [1], who proposed a Banglaword clustering method based on N-gram Language Model.In this paper they tried to cluster bangla word using theirsemantic and contextual similarity. In this approach they triedto cluster the words based on the idea that, the words that havesimilar meaning and are used in similar context in a sentence,belong to the same cluster.Their work was slightly upgraded later by Urmi, Jammy andIsmail [2]. They proposed a unsupervised learning approachto identify stem or root of a Bangla word from contextualsimilarity of words. Their object was to build a big corpus ofBangla stems along with their respective inflectional form.They worked with the assumption that if two words aresimilar in spelling and are used in similar context in manysentences, they have a higher chance of originating from thesame root. They implemented 6-gram model for stem detectionand achieved an accuracy of 40.18%. They have concludedthat with big amount of text data this model will improvefurther.Researchers then focused on producing word clusteringin dynamic approach and its performance. We get insightsabout this from the works of Yuan [3], who showed thatword clustering technique that is based on word similarities isbetter than conventional greedy approach in terms of speedand performance. The basic approach of this work was tocheck for a certain word in the corpus, its co-occurring wordsfor similarity. That is to say, if two words are similar, theirco-occurring word pattern will also be similar. Based onthis they computed word clusters</s>
<s>and when compared withother clustering methods, this approach was found to be moreefficient.The performance of dynamic models in producing Banglaword clusters was shown by Ahmed and Amin [4]. Theydiscussed the effect of Bangla word embedding model in docu-ment classification. They worked with a dataset prepared fromBangla newspaper documents. They applied word2vec modelto generate vector representation of words for word clustering.Using this they prepared clustering of word embeddings thatare found in close proximity to each other in feature space.This information was later used as features to solve Bangladocument classification problem.Altszyler, Sigman and Slezak [5] tried to find out if LSAand word2vec model’s capacity to identify relative dimensionincreases with increase in data. They found out that Word2veccan take advantage of all types of documents while LSA onlygives better performance when out-of-domain documents areremoved from corpus.In case of Arabic language Soliman, Eissa and El-Beltagy[6], found that the performance measure of word2vec differsfrom dataset to dataset but on each dataset it shows goodperformance in capturing similarity among words.Upgrading the performance of word2vec in finding vectorrepresentation of words in huge datasets like a dataset con-taining one billion words were attempted by Rengasamy, Fu,Lee and Madduri [7]. They applied word2vec in a multi-coresystem and found that this approach is 3.53 times faster thanoriginal multi-threaded word2vec implementation and 1.28times faster than recent parallel word2vec implementation.Ma and Zhang [8] discussed the effect of word2vec inreducing the dimensionality of large datasets. They foundout that, in dealing with large scale training data, word2vechelps in clustering similar data. This strategy can reduce datadimension and speed up multi-class classifications.With the goal of preparing vector representation of words,Naili, Chaibi, Ghezala [9], applied LSA, Word2vec and GloVeon both English and Arabic language. They reached the con-clusion that although all three methods performance dependon the language used, among the three, word2vec gives thebest vector representation of words.Robert Bamler and Stephan Mandt [10] tried to find thesemantic evolution of individual words over time in time-stamped datasets. They applied Word2vec model to producethe embedding vectors. They showed experimentally that bothskip-gram filtering and smoothing lead to smoothly changingembedding vectors that help predict contextual similarities atheld out time stamps.Fasttext model is a relatively new model ventured inproducing word clusterings. It is a variation of skip grammodel architecture of word2vce model which was proposedby Bojanowski, Grave, Joulin and Mikolov [11]. The methodthey followed was, each word was represented as a bag ofcharacter n-grams and vector representation was constructedfrom them. This allowed them to construct word clusters forwords not present in the training data. They concluded thatthis method gives state of the art word representations forboth similarity and analogy task.Finally, we can say that there is rich literature growingon word embedding techniques and there is much scope ofimproving in this sector.III. DATA COLLECETIONWe used three separate corpus, and merged them. Firstcorpus is, SUMono [12] which contains available online andoffline Bangla text data. We also used a news corpus, whichcontains news data from Bangla news websites. We alsoused Bangla wiki data from wikipedia. 
The detail is givenin table I. Accuracy of any</s>
model largely depends on the dataset it is applied on. If a word is used in various kinds of sentences, then the trained model can be more accurate, as it covers a larger variety of usages. The more frequent the words are, the more accurate the model will be. This corpus contains Bangla text data on various topics, because it was built by taking contents from various sources such as Bangla articles from Wikipedia, Bangla news portals and the writings of different renowned Bengali writers. As the text is collected from various sources, it covers topics of different kinds and sectors as well as the language structure of Bangla used in day-to-day life. This is an important aspect of the data collected, because to get better and accurate clusters dynamically we need data that covers the wide range of contexts in which a specific word can be used.

Figure 1 represents some of the most frequent words from the corpus, and detailed information about the corpus is presented in Table I.

Figure 1. Histogram of Most Frequent Words with Number of Occurrences

Table I: Details of the Corpus
Total sentences: 1,593,398
Total words: 2,51,89,733
Unique words: 5,21,391

IV. METHODOLOGY

Over the years many techniques have been introduced for word clustering. Most of the approaches give high accuracy for clustering English words. As Bangla is a more complicated language, it is hard to gain high accuracy. We attempted three different approaches to determine which approach works better for Bangla word clustering. Nowadays vector representations of words have become the most popular approach for building word clusters. We applied three machine learning approaches which are based on vector representations of words. We followed some steps to train our dataset; these steps are discussed below.

A. The Basic Steps

• Corpus: The corpus is stored as a text file in which the data consists of Bangla sentences. They can be treated as strings, but we cannot feed the strings directly to a model, so some pre-processing was done on the dataset.
• Tokenizing: We had to pre-process the corpus in order to use it as proper input. Firstly, we couldn't feed a word to a model just as a text string. For this, we tokenized each word. Example:
আমার সােথ বাংলায় কথা বল => 'আমার', 'সােথ', 'বাংলায়', 'কথা', 'বল'
• Training: We used the tokenized dataset for training. We compared the output from each model individually by using different window sizes, vector sizes and iterations over the dataset.

B. Our Experiments

1) Experiment I: Word2vec in Tensorflow: We used the official sentence embedding implementation of Tensorflow. The code generates word clusters based on the features of Word2vec using the Skip-gram model and the Negative Sampling accelerated classification algorithm. We tried different window sizes, vector sizes and iterations.
2) Experiment II: FastText Model: Facebook's AI Research lab created the FastText library for word embedding and text classification. Both the Skip-gram and CBOW models can be trained with FastText. For the training of the Skip-gram model, we kept the window size at 5 and the vector size at 100. We trained the CBOW model as well.
3) Experiment III: Word2vec from the Gensim package: The Python library Gensim provides a Word2Vec class for working with word embeddings. We tuned the parameters and checked results for both the Skip-gram and CBOW models. A minimal sketch of these training setups is given below.
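The sketch below is an illustrative reconstruction of the Experiment II and Experiment III setups using the Gensim wrappers; the paper itself used Facebook's standalone FastText tool for Experiment II, the vector size of 400 for the Gensim models follows the parameter tuning reported later in Table III, and parameter names follow Gensim 4.x. The toy sentences simply reuse the tokens from the tokenization example above so the snippet runs end to end; in the actual experiments they would be the full tokenized corpus.

```python
from gensim.models import Word2Vec, FastText

# Placeholder corpus: in the experiments above, 'sentences' holds one token
# list per Bangla sentence of the 1,593,398-sentence corpus.
sentences = [
    ["আমার", "সােথ", "বাংলায়", "কথা", "বল"],
    ["আমার", "কথা", "বল"],
    ["বাংলায়", "কথা", "বল"],
]

# Experiment II (FastText): skip-gram (sg=1) and CBOW (sg=0),
# window size 5 and vector size 100 as reported above.
ft_skipgram = FastText(sentences, vector_size=100, window=5, sg=1,
                       min_count=1, epochs=5)
ft_cbow = FastText(sentences, vector_size=100, window=5, sg=0,
                   min_count=1, epochs=5)

# Experiment III (Gensim Word2Vec): skip-gram and CBOW with vector size 400.
w2v_skipgram = Word2Vec(sentences, vector_size=400, window=5, sg=1,
                        min_count=1, epochs=5)
w2v_cbow = Word2Vec(sentences, vector_size=400, window=5, sg=0,
                    min_count=1, epochs=5)

# Nearest neighbours of a word are its cluster candidates.
print(w2v_skipgram.wv.most_similar("কথা", topn=3))
```

Both wrappers expose the trained vectors through the same wv interface, so nearest-neighbour queries look identical for all four models.

C. Training Time
The size of our corpus is 59856174 bytes. We used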
<s>acomputer with 4GB RAM, core i3-3110M CPU. Training Timefor each experiment is as follows-Table IITraining Time of the ExperimentsExperiment Training TimeWord2Vec in Tensorflow 18 minutesFastText- Skip gram Model 23 minutesFastText- CBOW Model 24 minutesGensim Word2Vec- Skip gram Model 30 minutesGensim Word2Vec- CBOW Model 32 minutesV. RESULT ANALYSISThere are similarities among the clusters from three dif-ferent approaches as well as some significant differences. Forsome words, we got a similar set of words for a set of models.But the variety was also notable. We tried to get the mostsatisfactory results from each approach. We applied variouscombination of window and vector sizes in order to tune theparameters to get the most optimum and satisfactory results.Table III shows parameter tuning for each approach.Table IIIParameter Tuning for Optimum ResultsExperiment Window size Vector size IterationI- Word2vec in Tensorflow 4 1000 10II- Skip gram Model 5 100 5II- CBOW Model 5 100 5III- Skip gram Model 5 400 5III- CBOW Model 5 400 5Some sample results from the experiments are given in tableIV, V, VI, VII and VIII.Table IVResults from Experiment IRandom Word Words on clusterআমরা আমােদর, আিম, চাই, যখন, তাই, তারা, িক, েসই, িকন্তু,সবাইতাঁর তার, েসই, সােথ, তাঁেদর, একজন, একই, ওই, িতিন, পের,বেলজন পৰ্েয়াজন, সুেযাগ, জেন , পাশাপািশ, তাই, দরকার, কারণ,িকছু, িকন্তু, তােদরেকান েকােনা, এমন, অন , কারণ, তেব, েসটা, বা, েনই, তাই,এখেনাপাের হেব, পারেব, পাির, হেল, পােরন, হেতা, চাই, চায়, পােরিন,থােকহেত েযেত, থাকেত, করেত, হেল, না, রাখেত, তাই, তাহেল,তেব, িদেতবড় সবেচেয়, অবস্থা, খুবই, খুব, আমােদর, অেনক, িকছু, মেতা,মানুেষর, আেছটাকা হাজার, লাখ, েকািট, টাকার, খরচ, পাঁচ, মাতৰ্, িবিকৰ্, িতন,পৰ্ায়নতুন মাধ েম, ৈতির, কাজ, জন , িবিভন্ন, নানা, একিট, এই, সব,একইেদখা অেদখা, েদখাইত, েদখাক, েদখায়ই, েদখাসহ, েদখায়া,েদখাএইসবই, েদখাত, েদখােবা, েদখােবাTable VResults from Experiment II-Skip GramRandom Word Words on clusterআমরা আমরা৷, কীআমরা, নয়আমরা, আমরাই, আমরােতা, হয়আ-মরা, িকআমরা, েহাকআমরা, আমর, েলআমরাতাঁর তাঁর৷, তাঁরই, তাঁরও, তাঁহা, তাঁরা, তাঁতীও, তাঁরাই, তাঁ,তাঁেকসহ, ওঁরজন জন ৷, জেন ঃ, জন ও, জেন ৷, েসৗজন , জেন , এজন ,জন ই, জিন , এরজনেকান েকানস, েকা , েকােনাা, েকােনাও, েকােনা, েকানডা, েকা ,েকােনাই, েকানও, েকানইপাের পােরা, পাের৷, পােরএ, পােরতখন, পােরঃ, পােরআর,পােরএমন, পােরনই, পােরন৷, ঐপােরহেত ঝেত, লখনউেত, ধেত, ইইউেত, নড়েত, নেত, েপেত,চড়েত, অইেত, ওেতবড় বড়বড়, বড়র, হড়বড়, বড়ও, বড়ইর, বড়েছাট, েছাটবড়,েছাটেছাট, েছাট, নড়বড়টাকা টাকা৷, দশটাকা, টাকায়ও, টাকাসহ, হাজারগুণ, দুইটাকা,হাজারও, দুটাকা, হাজারিদঘী, টাকাকীনতুন নতুননতুন, নতুনতর, নতুন৷, নতুনই, নত, নতূন, ৈজতুন,িনত নতুন, নতুনরা, চালুেদখা অেদখা, েদখাইত, েদখাক, েদখায়ই, েদখাসহ, েদখায়া,েদখাএইসবই, েদখাত, েদখােবা, েদখােবাTable VIIIResults from Experiment III- CBOWRandom Word Words on clusterআমরা ইয়াকর্েদর, যতই, েপঁৗছােত, েহাক, িলডাররা, েশয়ািরেঙর,অবশ ই, কথাও, িচন্তা, দুবারইতাঁর তাঁরা, ৈতিরেত, েকা, সারা, বজর্েনর, সৃিষ্টেত, ভাইরাস,অপিরেশািধত, কলফিন, িলিপরজন যুিক্ত, িচিহ্নত, েজারদার, পিরেবশ, আশব্াস, পৰ্কাশ, সমা-েলাচনা, অন্তভূর্ক্ত, পৰ্ত াখ ান, পৰ্িতেরাধেকান েকােনা, পৰ্ত য়, িকছুরই, কারণ, বইেত, েতমনভােব,আধ ািত্মকতা, সাবেজেক্ট, ঘেটিন, ইন্ডািস্টৰ্রপাের ঈষর্ায়, দগ্ধ, অিভবাসী, পাঠ বই, পারেবন, পারত, পারেব,েপেরিছল, অসব্ীকৃিত, নািমজউিদ্দনহেত হেতই, থাকেত, িদেত, যাইেত, িফল্মগুেলা, েপেত, িনেত,সািজয়া, ঘটােত, জানেতবড় ফুেটা, েশ্লষ্মা, দরজাটা, আেলা, পাথেরর, েলেজর, চওড়া,রাস্তা, অন্ধকার, খােটাটাকা বছেরর, 'ছয়, চলাচেলর, দশ, সপ্তােহর, পূবর্বেঙ্গ, লাখ,েজলার, উপলেক্ষ, গান্ধীরনতুন গান্ধীেক,পািকস্তান, বণ্টন, কৰ্য়, মন্তব , িশক্ষা, পিরকল্পনার,অিধগৰ্হণ, মাউন্টব ােটন, পৰ্স্তািবতেদখা পাওয়া, জানা, 
পাহারায়, েবঁেক, কাঁচুিল, েব্লজােরই, উদুর্েক,শহরগুেলার, ঝেরতুিম, েগেয়Table VIResults from Experiment II- CBOWRandom Word Words on</s>
<s>clusterআমরা আমরাযিদও, আমরাই, আপনােকও, আমরাও, বেলিছআম-রা, আপনা, কীআমরা, কীটও, আপনােকই, কীটসতাঁরপুনঃআেলাচনার, সুেলাচনার, আলাপআেলাচনার, েকস্তার,তাঁরই, কাইয়ুমআেলাচনার, সৃিষ্টশীলতার, িধক্কার, মিনকার,িবঘারজন জন ও, জন ৷, জন্স, েসৗজন , এজন , জেন ঃ, েসৗজন ঃ,জন ই, জেন ৷, তজ্জনেকান েকােনাও, েকােনাা, েকােনাই, েকােনা, লুেকােনা, েথান,েকানস, েকা , েকা , েকােনািটইপাের পােরআর, পােরতখন, পাের৷, পােরএ, পােরা, পারডন,পােরঃ, পােরও, পারৱূর, পারদহেত কইরেত, ঝরেত, ভরেত, কসরেত, েখারেত, শরেত, মরেত,ধরেত, ঠকেত, কুদরেতবড় বড়র, বড়ও, হড়বড়, বড়ইর, বড়ই, বড়সড়, গড়বড়,বড়সেড়া, নড়বড়, বড়বাড়ীটাকা দশটাকা, ওসাকা, শলাকা, েপাঁদপাকা, গাঢাকা, টাকা৷,জলধাকা, ইয়াকা, হাজারী, এলাকানতুন নতুননতুন, নতুন৷, 'নতুনই, েফারামিট, অনুিষ্ঠত, পুনগর্িঠত,িতনচারিট, উৎকিন্ঠত, পিরচালকমণ্ডলীর, ভূখণ্ডিটেদখা অেদখা, েদখায়ই, েদখাসহ, েদখাইত, েদখাএইসবই, েদ-খাত, েদখাক, েদখােবা, েদখায়া, েদখাওTable VIIResults from Experiment III- Skip GramRandom Word Words on clusterআমরা আিম, েতামরা, েসটা, তাহেল, পাির, এখেনা, হয়েতা,এখােন, কেরিছ, েতামােকতাঁর তাঁেদর, সব্াধীনতার, িযিন, েনন, িনজ, েমিডেকল, েদন,করিছেলন, পিরবােরর, যুদ্ধজন জেন , সুেযাগ, েচষ্টা, কােজ, উেদ্দেশ , মাধ েম, ব াপাের,ব বস্থা, করেল, পযর্ােয়েকান েকােনা, থাকার, ছাড়া, তােত, বসােনার, আপিত্ত, পৰ্েয়াজন,েতমন, উপায়, এমনপাের পারেব, চায়, পােরিন, পােরন, পারত, হেতা, পারব, বাধ ,চাই, পারেবনহেত েপেত, েযেত, থাকেত, রাখেত, আনেত, েদাকােনও, িনেত,ঘটেত, েবিশও, লাগেতবড় েছাট, সবেচেয়, িশেরানামায়, েমেয়, গায়েকরা, সুন্দর,েজারটা, িজিনস, মধ িবত্তেদর, েছেলটাকা েকািট, হাজার, পাঁচ, টাকার, লক্ষ, বরাদ্দ, লাখ, বছর, পৰ্ায়,নতুন িনযর্াতন, পৰ্িকৰ্য়া, জাতীয়, পূেবর্, মামলা, এলাকায়, েসনাবা-িহনী, অথর্ৈনিতক, িবচােরর, েজাটেদখা কেম, রেয়, ফুসফুেস, গেবষণায়, জানা, েবেড়, পাওয়া,েরাগী, রূপ, জীবদ্দশাFrom the given examples of the clusters produced by thethree different models we can come to some decisions. If weconsider clusters containing similar and synonymous wordsWord2Vec implementation of Tensorflow gives good results.Comparing the two variations of FastText model, the FastText-Skip Gram model is the best because it gives all the inflectionsof a specific word. The FastText-CBOW model do not producesuch accurate results. Gensim library based skip-gram modelgives contextually similar words but fails to give inflection ofwords. The CBOW architecture of this model does not producegood clusters rather it gives noisy output. So from evaluatingthe results of these models, we can come to the conclusion thatFastText-Skip Gram model is the more accurate and efficientmodel for building Bangla word clusters.FastText uses n-grams of a word and create vectors for thesum of all the n-grams of the word. As a result it can produceoutput even if the word is not in the corpus. But the otherapproaches can not generate results for an unknown word.Though if we want to get cluster for a unknown word fromFastText model, there was no satisfactory results but it didgave something. If the dataset can be prepared properly forthe FastText skip gram model, we think it will produce reallyamazing and much more accurate word clusters.VI. CONCLUSIONSBangla is a complex language with a wide range of vocab-ulary containing many rare words. Language structure, use ofcomplex words, multiple meanings in different context all ofthese reasons makes it really difficult to choose one model asthe best model for Bangla word clustering. The contents of thedataset also plays a big role in deciding this. We have triedto give some perspective on some of the dynamic approachesthat have been used for Bangla word clustering. 
Among the models applied, we have reached the conclusion that the FastText Skip-gram model produces the best result on the given dataset. We can get more accurate results by increasing the size of the dataset.
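As the result analysis above notes, FastText builds a word's vector from its character n-grams and can therefore return neighbours even for words absent from the training corpus. The snippet below is a small, self-contained illustration of that behaviour using the Gensim FastText wrapper on a toy corpus; it is not part of the reported experiments, and all tokens and parameter values in it are placeholders.

```python
from gensim.models import FastText

# A tiny toy corpus of tokenized sentences (placeholder English tokens);
# the real experiments used the Bangla corpus described in Section III.
toy_sentences = [
    ["the", "market", "price", "rises"],
    ["the", "market", "price", "falls"],
    ["students", "read", "books", "daily"],
]

# min_n/max_n control the character n-gram range used for subwords.
model = FastText(toy_sentences, vector_size=32, window=3, sg=1,
                 min_count=1, epochs=50, min_n=2, max_n=4)

# "prices" never occurs in the toy corpus, but FastText can still build a
# vector for it from its character n-grams and look up nearest neighbours;
# a plain Word2Vec model would raise a KeyError here.
print(model.wv.most_similar("prices", topn=3))
```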
References

[1] S. Ismail and M. S. Rahman, "Bangla word clustering based on n-gram language model," in Electrical Engineering and Information & Communication Technology (ICEEICT), 2014 International Conference on. IEEE, 2014, pp. 1–5.
[2] T. T. Urmi, J. J. Jammy, and S. Ismail, "A corpus based unsupervised Bangla word stemming using n-gram language model," in Informatics, Electronics and Vision (ICIEV), 2016 5th International Conference on. IEEE, 2016, pp. 824–828.
[3] L. Yuan, "Word clustering algorithms based on word similarity," in Intelligent Human-Machine Systems and Cybernetics (IHMSC), 2015 7th International Conference on, vol. 1. IEEE, 2015, pp. 21–24.
[4] A. Ahmad and M. R. Amin, "Bengali word embeddings and it's application in solving document classification problem," in Computer and Information Technology (ICCIT), 2016 19th International Conference on. IEEE, 2016, pp. 425–430.
[5] E. Altszyler, M. Sigman, and D. F. Slezak, "Corpus specificity in LSA and word2vec: the role of out-of-domain documents," arXiv preprint arXiv:1712.10054, 2017.
[6] A. B. Soliman, K. Eissa, and S. R. El-Beltagy, "Aravec: A set of Arabic word embedding models for use in Arabic NLP," Procedia Computer Science, vol. 117, pp. 256–265, 2017.
[7] V. Rengasamy, T.-Y. Fu, W.-C. Lee, and K. Madduri, "Optimizing word2vec performance on multicore systems," in Proceedings of the Seventh Workshop on Irregular Applications: Architectures and Algorithms. ACM, 2017, p. 3.
[8] L. Ma and Y. Zhang, "Using word2vec to process big text data," in Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015, pp. 2895–2897.
[9] M. Naili, A. H. Chaibi, and H. H. B. Ghezala, "Comparative study of word embedding methods in topic segmentation," Procedia Computer Science, vol. 112, pp. 340–349, 2017.
[10] R. Bamler and S. Mandt, "Dynamic word embeddings," arXiv preprint arXiv:1702.08359, 2017.
[11] P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov, "Enriching word vectors with subword information," Transactions of the Association for Computational Linguistics, vol. 5, pp. 135–146, 2017.
[12] M. A. Al Mumin, A. A. M. Shoeb, M. R. Selim, and M. Z. Iqbal, "Sumono: A representative modern Bengali corpus."
<s>Automatic Formation, Termination & Correction of Assamese word using Predictive & Syntactic NLPAutomatic Formation, Termination & Correction of Assamese word using Predictive & Syntactic NLPManash Pratim BhuyanDept. of Information TechnologyGauhati UniversityGuwahati-14, Indiampratim250@gmail.comProf. Shikhar Kumar SarmaDept. of Information TechnologyGauhati UniversityGuwahati-14, Indiasks001@gmail.comAbstract—Automatic Formation, Termination & Correction ofAssamese words is a method in which a user will get the relevantsuggestions for the word which he/she is intended to write. In theformation and termination method user will type a letter and thesystem will display the most probable words related to that letter andif the user finds required word he/she will select the word, If therequired word does not appear in the suggestion list then the user hasto type more letters along with the previous letter to purify thesuggestion list. In the correction method, if a user type a word whichis not in the corpus then the system will show one warning for thatword which prevents the occurrence of the errors.Keywords—word prediction, n-gram, probability, corpusI. INTRODUCTIONAutomatic Formation, Termination & Correction methodhelps the user to write in a particular natural languageefficiently. This method is not only suggesting the word thatthe user intended to write; system is also predict the next wordrelated to the previous word. Currently, the natural languageuse in this method is Assamese but the method is not languagedependent from any point of view. The system can work on thecorpus of any natural language where the tokens are separatedby the space. This approach can help the differently abledpeople to write correctly and also increase the typing speed. Itis a technique which can prevent the occurrence of word error.The rest of the paper is organized as follows: Section II reflects about the related works, Section IIIdescribes the proposed model and also the procedure tocalculate the probabilities of the n-gram models, Section IVanalyses the results, Finally Section V concludes the the workand proposes the future work.II. RELATED WORKSSaharia et. al [1] have designed LuitPad Assamese writingsoftware which can help the user to write the word bysuggesting the related words from the part of a word and alsosuggest the letters which are mostly related to a particularletter.Haque et al.[2] has developed a method for word predictionin Bangla language using stochastic language models andperformance of the prediction system is evaluated by usingunigram, bigram, trigram, backoff and deleted interpolationmethod.Bickel et. al [3] derived a solution for sentence completionproblem using linear interpolation of N -gram models. Theyderived a k best Viterbi decoding algorithm with a confidence-based stopping criterion which conjectures the words that mostlikely succeed an initial fragment. M. Ghayoomi et. al [4] concluded that the best approach tohave appropriate predictions by combinig more linguisticknowledge such as syntactic, semantic, and pragmatic inaddition with the statistical knowledge at the same time to savemore keystrokes. . This approach might be closer to the 100%KSS (KeyStrokes Saving).D. C. Cavalieri [5] et. al accomplished the word predictiontask by using exponential interpolation to merge POS (Part-OfSpeech) based language model and a word based n-gramlanguage model.Spiccia [6] et. al said that word prediction generally relieson n-grams occurrence</s>
<s>statistics, which may have huge datastorage requirements and does not take into account the generalmeaning of the text. A method based on LSA (Latent SemanticAnalysis), to resolve these issues had been proposed. Anasymmetric Word-Word frequency matrix was employed toachieve higher scalability with large training datasets than theclassic Word-Document approach. They also proposed afunction for scoring candidate terms for the missing word in asentence. How this function approximates the probability ofoccurrence of a given candidate word had been shown.Experimentally found that the proposed method outperformsnon neural network language models.C. Aliprandi et. al [7] had evaluated FastType performanceenhancements for an Italian language, which is a inflectedlanguage. C. Aliprandi et. al abled to achieve word predictionKeystroke Saving up to 51% for a standard prediction list oflength 10. They enriched the Language Model with morpho-syntactic information and provided the prediction method withan on-the-fly Part-of-Speech word tagger and large lexicondictionaries.Q. Abbas et. al [8] claimed that their model help thehandicapped people to type fast just like normal human beingand also strengthens the normal ones further ahead. Theyachieved overall 65.28% of KS is comparable or better than thestate of the art resources in the domain of Urdu language andalso said that one was a positive contribution in Urdu languageprocessing. Their model has a quality of boosting with theincrease in length L of the text, which is quite good in case ofinflected languages like Urdu.Habib et. al [9] focused their research on modeling,training and recall techniques for automatic sentencecompletion using supervised machine learning technique basedon popular N-gram language model. N-gram based wordProceedings of the International Conference on Communication and Electronics Systems (ICCES 2018)IEEE Xplore Part Number:CFP18AWO-ART; ISBN:978-1-5386-4765-3978-1-5386-4765-3/18/$31.00 ©2018 IEEE 544prediction works well for English, but for Bangla language, itwas found more challenging to get very good, e.g. more than90% accuracy, performance as it depends on training corpus ofsize more than six hundred thousand sentences. Though duringthe several phases of experiments, the top three modelsTrigram, Backoff Linear Interpolation showed almost samelevel of accuracy which was above 70%, however in terms ofboth accuracy and failure rate, the linear interpolationoutperforms the other models.Zagler et. al[11] the FASTY (predictive typing,enhancing text input user interface developing, empoweringdisabled people) would assist language impaired persons towrite texts faster, with less physical/cognitive load and withbetter spelling and grammar. FASTY would be configurable fordifferent types of disabilities, different communication settingsand different European languages. It would allow easier accessto PC based office systems, to modern forms of ITcommunication and a faster usage of text to speechsynthesizers for voice communication.Troiano et. al[12] in their paper, they showed howprediction could be used to optimize the UI layout on mobiledevices, alongside the most common use in auto filling andsuggesting forthcoming entries. The efficiency of the proposedsystem depends on the quality of predictions and fulfillment ofthe expectations of the user. From a panel of users surveyshowed that efficiency of the proposed system can be fullyachieved.III. 
PROPOSED MODELA system is designed which can predict words after a letterentered by the user. The process will do corpus look up everytime after each input</s>
<s>by the user. Each time the predictorreturns top five or six word related to his/her input and if theuser wants to use any of the input he/she will select thepredicted word or he/she can skip the prediction if theprediction is unable to meet his/her expectation and he/she typethe next letter. Figure 1 shows the proposed working model.Figure 1: Word Formation, Termination and Correction modelIn the Figure 1 the labeled on the edges indicates the stepnumbers. There are around 5 cycles in the Figure for wordformation, termination and correction. If the expected word isformed in the first attempt the process will terminate and go forthe next word, otherwise to bring the expected word the userhas to enter more letters of that word. For error correction, if auser enters letters for which no prediction appears then warningmessage is displayed indicating the word to be an erroneousword, warning message request the user to delete the letters ofthe current word one by one until a prediction appears in thesuggestion list. The warning message can be ignored, becausethe system will allow a user to write the name of a person,place or any other entity which is valid but not in the corpus.A. N-gram Model:N-gram language model is a probabilistic model where theapproximate matching of next item is very high. We select N-gram based word prediction method because these are morestatistical approach and accurate to predict the next word in asentence and N-gram language model is a natural approach tothe construction of sentence completion in a system.Example: " ১৩ শততককৰ পৰক ঊননশ শততককৰ পৰথমছছকৱকনলছক অসমত আছহকমসকছল ৰকজতব ___”In the blank space the proper word is " ”কতৰতছল , which wecan guess by observing the previous sequence of the sentence,this is how the proposed model works.Types of N-gram are:-1. Unigram (1-gram) model2. Bigram (2-gram) model3. Trigram (3-gram) model4. Quadrigram (4-gram) model, etc.Sample sentence as example: " এছন ককচত পপতলছচ আদকলতৰ আছদশ বক পছৰকৱকনক কনকছহকৱকনক আচকমমক কগৰপকৰ কতৰব পকছৰ |”1. Unigram (1-gram) model: Uni-grams of the above sentence are:[এছন, ককচত, পপতলছচ, আদকলতৰ, আছদশ, বক, পছৰকৱকনক, কনকছহকৱকনক,আচকমমক, কগৰপকৰ, কতৰব, পকছৰ]In unigram model each token of the sentence is a uni-gram.There are total 12 unigrams2. Bigram (2-gram) model: Bi-grams of the above sentence are:[( এছন ককচত), ( ককচত পপতলছচ), ( পপতলছচ আদকলতৰ), ( আদকলতৰ আছদশ),( আছদশ বক),( বক পছৰকৱকনক), ( পছৰকৱকনক কনকছহকৱকনক), ( কনকছহকৱকনক আচকমমক),( আচকমমক কগৰপকৰ), ( কগৰপকৰ কতৰব), ( কতৰব পকছৰ)]In Bi-gram model group of two tokens of the sentence is abi gram.Proceedings of the International Conference on Communication and Electronics Systems (ICCES 2018)IEEE Xplore Part Number:CFP18AWO-ART; ISBN:978-1-5386-4765-3978-1-5386-4765-3/18/$31.00 ©2018 IEEE 545There are total 11 bigrams3. Trigram (3-gram) model: Tri-grams of the above sentence are:[( এছন ককচত পপতলছচ), ( ককচত পপতলছচ আদকলতৰ), ( পপতলছচ আদকলতৰ আছদশ),( আদকলতৰ আছদশ বক), ( আছদশ বক পছৰকৱকনক),( বক পছৰকৱকনক কনকছহকৱকনক),( পছৰকৱকনক কনকছহকৱকনক আচকমমক), ( কনকছহকৱকনক আচকমমক কগৰপকৰ), (আচকমমক কগৰপকৰ কতৰব), ( কগৰপকৰ কতৰব পকছৰ)]In Tri-gram model group of two tokens of the sentence is atri-gram. There are total 10 trigrams4. Quadrigram (4-gram) model: Quadrigrams of the above sentence are:[( এছন ককচত পপতলছচ আদকলতৰ),</s>
<s>( ককচত পপতলছচ আদকলতৰ আছদশ), (পপতলছচ আদকলতৰ আছদশ বক), ( আদকলতৰ আছদশ বক পছৰকৱকনক), ( আছদশ বক পছৰকৱকনককনকছহকৱকনক),( বক পছৰকৱকনক কনকছহকৱকনক আচকমমক), ( পছৰকৱকনক কনকছহকৱকনক আচকমমককগৰপকৰ), ( কনকছহকৱকনক আচকমমক কগৰপকৰ কতৰব), ( আচকমমক কগৰপকৰ কতৰব পকছৰ)]In Quadrigram model group of two tokens of the sentenceis a quadri-gram. There are total 9 quadrigramsB. Probability calculation in the various N-gram models:Sample sentence as example: " এছন ককচত পপতলছচ আদকলতৰ আছদশ বক পছৰকৱকনক কনকছহকৱকনক আচকমমক কগৰপকৰ কতৰব পকছৰ |”Corpus size = 51, 447Unigram (1-gram) model: P(এছন) = 8.289E-3, P( ককচত) = 0.667E-3, P(পপতলছচ) = 2.477E-3, P(আদকলতৰ) =2.477E-3, P(আছদশ) = 2.191E-3 ,P(বক )= 42.59E-3, P(পছৰকৱকনক) = 4E-3, P(কনকছহকৱকনক) = 1.143E-3,P(আচকমমক) = 0.953E-3,P(কগৰপকৰ) = 6.955E-3, P(কতৰব) =34.7E-3, P(পকছৰ) = 13.24E-3P(sentence= এছন ককচত পপতলছচ আদকলতৰ আছদশ বক পছৰকৱকনক কনকছহকৱকনক আচকমমক কগৰপকৰ কতৰব পকছৰ) = P(এছন) x P(ককচত) x P(পপতলছচ) x P(আদকলতৰ) x P(আছদশ) x P( বক )x P(পছৰকৱকনক) x P(কনকছহকৱকনক) x P(আচকমমক) x P(কগৰপকৰ) x P(কতৰব) xP(পকছৰ)= 8.289E-3 x 0.667E-3 x 2.477E-3 x 2.477E-3 x 2.191E-3x 42.59E-3 x 4.0E-3 x 1.143E-3 x 0.953E-3 x 0.6955E-3 x34.7E-3 x 13.24E-3= 4.4E-29 Bigram (2-gram) model: P(এছন) = 8.289E-3, P(ককচত | এছন) = 0.0133, P(পপতলছচ | ককচত)= 0.184, P( আদকলতৰ | পপতলছচ) = 0.056, P(আছদশ | আদকলতৰ) = 0.073,P(বক | আছদশ) = 0.106, P(পছৰকৱকনক | বক ) = 0.003, P(কনকছহকৱকনক |পছৰকৱকনক) = 0.051, P(আচকমমক | কনকছহকৱকনক) = 0.108, P(কগৰপকৰ |আচকমমক) = 0.15, P(কতৰব | কগৰপকৰ ) = 0.067 P(পকছৰ | কতৰব) = 0.016P(Sentence = এছন ককচত পপতলছচ আদকলতৰ আছদশ বক পছৰকৱকনক কনকছহকৱকনক আচকমমক কগৰপকৰ কতৰব পকছৰ)= P(এছন) x P( ককচত | এছন) x P( পপতলছচ | ককচত) x P( আদকলতৰ | পপতলছচ)x P( আছদশ | আদকলতৰ) x P( বক | আছদশ) x P( পছৰকৱকনক | বক ) x P(কনকছহকৱকনক| পছৰকৱকনক) x P( আচকমমক | কনকছহকৱকনক) x P( কগৰপকৰ | আচকমমক) x P( কতৰব | কগৰপকৰ ) x P( পকছৰ | কতৰব)= 8.289E-3 x 0.0133 x 0.184 x 0.056 x 0.073 x 0.106 x0.003 x 0.051 x 0.108 x 0.15 x 0.067 x 0.016= 2.4E-15Trigram (3-gram) model: Sentence = এছন ককচত পপতলছচ আদকলতৰ আছদশ বক পছৰকৱকনক কনকছহকৱকনক আচকমমক কগৰপকৰ কতৰব পকছৰP(এছন) = 8.289E-3, P( ককচত | এছন) = 0.0133, P( পপতলছচ | এছন ককচত ) = 1, P( আদকলতৰ | ককচত পপতলছচ) = 1,P( আছদশ | পপতলছচ আদকলতৰ) = 0.56,P( বক | আদকলতৰ আছদশ) = 0.61, P( পছৰকৱকনক | আছদশ বক) = 0.61P( কনকছহকৱকনক | বক পছৰকৱকনক) = 0.393, P( আচকমমক | পছৰকৱকনক কনকছহকৱকনক) = 0.28, P( কগৰপকৰ | কনকছহকৱকনক আচকমমক) = 1P( কতৰব | আচকমমক কগৰপকৰ) = 1, P( পকছৰ | কগৰপকৰ কতৰব) = 0.196P(Sentence = এছন ককচত পপতলছচ আদকলতৰ আছদশ বক পছৰকৱকনক কনকছহকৱকনক আচকমমক কগৰপকৰ কতৰব পকছৰ)=P(এছন) x P( ককচত | এছন) x P( পপতলছচ | এছন ককচত ) x P( আদকলতৰ | ককচত পপতলছচ) x P( আছদশ | পপতলছচ আদকলতৰ) x P( বক | আদকলতৰ আছদশ)x P( পছৰকৱকনক | আছদশ বক) x P( কনকছহকৱকনক | বকপছৰকৱকনক) x P( আচকমমক | পছৰকৱকনক কনকছহকৱকনক) x P( কগৰপকৰ | কনকছহকৱকনকআচকমমক) x P( কতৰব | আচকমমক কগৰপকৰ) x P( পকছৰ | কগৰপকৰ কতৰব)= 8.289E-3 x 0.0133 x 1 x 1 x 0.56</s>
<s>x 0.61 x 0.61 x 0.393 x0.28 x 1 x 1 x 0.196= 5E-7Quadrigram (4-gram) model: P(এছন) = 8.289E-3, P( ককচত | এছন) = 0.0133, P( পপতলছচ | এছন ককচত ) = 1, P( আদকলতৰ | এছন ককচত পপতলছচ) = 1, P( আছদশ | ককচত পপতলছচ আদকলতৰ) = 1, P( বক | পপতলছচ আদকলতৰ আছদশ) =1, P( পছৰকৱকনক | আদকলতৰ আছদশ বক) = 0.44, P( কনকছহকৱকনক | আছদশ বক পছৰকৱকনক) = 0.44, P( আচকমমক | বক পছৰকৱকনক কনকছহকৱকনক) = 0.22, P( কগৰপকৰ | পছৰকৱকনক কনকছহকৱকনক আচকমমক) = 1, P( কতৰব | কনকছহকৱকনক আচকমমক কগৰপকৰ) = 1, P( পকছৰ | আচকমমক কগৰপকৰ কতৰব) = 0.33P(Sentence = এছন ককচত পপতলছচ আদকলতৰ আছদশ বক পছৰকৱকনক কনকছহকৱকনক আচকমমক কগৰপকৰ কতৰব পকছৰ)Proceedings of the International Conference on Communication and Electronics Systems (ICCES 2018)IEEE Xplore Part Number:CFP18AWO-ART; ISBN:978-1-5386-4765-3978-1-5386-4765-3/18/$31.00 ©2018 IEEE 546= P(এছন) x P( ককচত | এছন) x P( পপতলছচ | এছন ককচত ) x P( আদকলতৰ | এছন ককচত পপতলছচ) x P( আছদশ | ককচত পপতলছচ আদকলতৰ) x P( বক | পপতলছচ আদকলতৰ আছদশ) x P( পছৰকৱকনক | আদকলতৰ আছদশ বক) xP( কনকছহকৱকনক | আছদশ বক পছৰকৱকনক) x P( আচকমমক | বক পছৰকৱকনককনকছহকৱকনক) x P( কগৰপকৰ | পছৰকৱকনক কনকছহকৱকনক আচকমমক) x P( কতৰব | কনকছহকৱকনক আচকমমক কগৰপকৰ) x P( পকছৰ | আচকমমক কগৰপকৰ কতৰব) = 8.289E-3 x 0.0133 x 1 x 1 x 1 x 1 x 0.44 x 0.44 x 0.22 x 1x 1 x 0.33= 1.5E-6 IV. RESULT AND DISCUSSIONSOne experiment is performed to compare the various n-grammodels so that in the future experiments the best n-gram modelcan be used to get the efficient result. For eight differentsentences the probabilities are calculated in the four n-grammodels which are shown in the Table 1. Table 1: Probability of the sentences in the various n-grammodelsThe average sentence probabilities are shown in the Figure 2.Figure 2: Avg. Sentence probability graph in various N-grammodelsFrom the above Figure 2 it is seen that the sentenceprobabilities in trigram and quadrigram models are not verymuch deviated from each other. The testing environment forthe experiment is shown in the Figure 3 Figure 3: Snapshot of the testing GUI V. CONCLUSION AND FUTURE WORKAutomatic word formation, termination and correctionsystem can help people to increase their writing speed bypredicting the relevant words. Among the various n-grammodels after comparing them from the experimental result it isfound that trigram and quadrigram models showing almostsimilar kind of result, but at the same time the corpus size fortrigram and quadrigram has a significant difference. So, fromthe above experiment we can conclude that for relativly fastercomputer a user may go for the quadrigram model and foreconomical computer user may confined in trigram or bigram.In addition the proposed model also contains syntactic levelprediction because the proposed model stores the fragmentedsentences which are syntactically correct. In the future linear interpolation model can incorporated tocheck the performance of the system. In addition, accuracy andthe efficiency of the proposed model will be evaluated with agroup of users and their feedback. Corpus size can be increasedto get more accurate results. To enhance the</s>
n-grams, syntactic n-grams can be implemented to reduce the size of the n-gram corpus.

REFERENCES

[1] N. Saharia and K. M. Konwar, "LuitPad: a fully unicode compatible Assamese writing software," in Proceedings of the 2nd Workshop on Advances in Text Input Methods (WTIM 2), Mumbai, India, 2012, pp. 79-88.
[2] M. Haque, M. T. Habib, and M. M. Rahman, "Automated word prediction in Bangla language using stochastic language model," International Journal in Foundations of Computer Science & Technology (IJFCST), vol. 5, Nov. 2015.
[3] S. Bickel, P. Haider, and T. Scheffer, "Predicting sentences using n-gram language models," in Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05, pp. 193–200, Stroudsburg, PA, USA. Association for Computational Linguistics, 2005.
[4] M. Ghayoomi and S. Momtazi, "An overview on the existing language models for prediction systems as writing assistant tools," in Systems, Man and Cybernetics, 2009. SMC 2009. IEEE International Conference on, San Antonio, Texas, 11-14 October 2009, pp. 5083–5087, ISSN: 1062-922X.
[5] D. C. Cavalieri, S. E. Palazuelos-Cagigas, T. F. Bastos-Filho, and M. Sarcinelli-Filho, "Combination of language models for word prediction: an exponential approach," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 9, pp. 1481–1494, 2016.
[6] C. Spiccia, A. Augello, G. Pilato, and G. Vassallo, "A word prediction methodology for automatic sentence completion," 2015 IEEE International Conference on Semantic Computing (ICSC), Anaheim, CA, USA, 2015, pp. 240-243. doi: 10.1109/ICOSC.2015.7050813
[7] C. Aliprandi, N. Carmignani, N. Deha, P. Mancarella, and M. Rubino, "Advances in NLP applied to word prediction," 2008.
[8] Q. Abbas, "A stochastic prediction interface for Urdu," Intelligent Systems and Applications, vol. 7, no. 1, pp. 94-100, 2014.
[9] M. Habib, A. Al-Mamun, M. Rahman, S. M. T. Siddiquee, and F. Ahmed, "An exploratory approach to find a novel metric based optimum language model for automatic Bangla word prediction," International Journal of Intelligent Systems and Applications, no. 2, pp. 47-54, 2018. doi: 10.5815/ijisa.2018.02.05
[10] H. Al-Mubaid, "A learning-classification based approach for word prediction," The International Arab Journal of Information Technology, vol. 4, no. 3, pp. 264–271, 2007.
[11] W. L. Zagler, C. Beck, and G. Seisenbacher, "FASTY - Faster and easier text generation for disabled people," presentation at AAATE '03, Dublin, 2003; in Assistive Technology - Shaping the Future, G. Craddock, L. McCormack, R. Reilly, and H. Knops (eds.), IOS Press, vol. 11, 2003, pp. 964-968.
[12] L. Troiano, C. Birtolo, and R. Armenise, "Modeling and predicting the user next input by Bayesian reasoning," Soft Computing, vol. 21, pp. 1583-1600, 2017.
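The sentence probabilities worked out in Section III-B of the paper above follow the standard chain-rule factorisation under an n-gram model: for the bigram case, P(w1 ... wn) is approximated as P(w1) x P(w2|w1) x ... x P(wn|wn-1). As a minimal, self-contained illustration of the same computation (using placeholder English tokens rather than the 51,447-token Assamese corpus used in the paper), the bigram estimate can be written as:

```python
from collections import Counter

# Toy corpus of tokenized sentences (placeholder tokens; the paper builds
# its counts from an Assamese corpus of 51,447 tokens).
corpus = [
    ["the", "court", "may", "issue", "an", "order"],
    ["the", "court", "may", "reject", "the", "appeal"],
    ["an", "order", "may", "follow"],
]

unigrams = Counter(tok for sent in corpus for tok in sent)
bigrams = Counter()
for sent in corpus:
    bigrams.update(zip(sent, sent[1:]))
total_tokens = sum(unigrams.values())

def bigram_sentence_probability(sentence):
    # P(w1) * P(w2|w1) * ... * P(wn|w(n-1)), mirroring the bigram example in
    # Section III-B; no smoothing, and all tokens are assumed in-vocabulary.
    prob = unigrams[sentence[0]] / total_tokens
    for prev, cur in zip(sentence, sentence[1:]):
        prob *= bigrams[(prev, cur)] / unigrams[prev]
    return prob

print(bigram_sentence_probability(["the", "court", "may", "issue", "an", "order"]))
```

The trigram and quadrigram variants used in the paper condition on the previous two and three tokens respectively; only the count tables change.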
<s>B.Sc. in Computer Science and Engineering ThesisDevelopment of A Word Based Spell Checker for BanglaLanguageSubmitted byKowshik Bhowmik201114033Afsana Zarin Chowdhury201114049Sushmita Mondal201114058Supervised byDr. Hasan SarwarProfessor and Head of the Department, CSEUnited International University (UIU)Department of Computer Science and EngineeringMilitary Institute of Science and TechnologyDecember 2014CERTIFICATIONThis thesis paper titled “Development of A Word Based Spell Checker for Bangla Lan-guage”, submitted by the group as mentioned below has been accepted as satisfactory inpartial fulfillment of the requirements for the degree B.Sc. in Computer Science and Engi-neering in December 2014.Group Members:Kowshik BhowmikAfsana Zarin ChowdhurySushmita MondalSupervisor:Dr. Hasan SarwarProfessor and Head of the Department, CSEUnited International University (UIU)CANDIDATES’ DECLARATIONThis is to certify that the work presented in this thesis paper, titled, “Development of A WordBased Spell Checker for Bangla Language”, is the outcome of the investigation and researchcarried out by the following students under the supervision of Dr. Hasan Sarwar, Professorand Head of the Department, CSE, United International University(UIU).It is also declared that neither this thesis paper nor any part thereof has been submittedanywhere else for the award of any degree, diploma or other qualifications.Kowshik Bhowmik201114033Afsana Zarin Chowdhury201114049Sushmita Mondal201114058iiiACKNOWLEDGEMENTWe are thankful to Almighty Allah for his blessings for the successful completion of ourthesis. Our heartiest gratitude, profound indebtedness and deep respect go to our supervisor,Dr. Hasan Sarwar, Professor and Head of the Department, CSE, United International Uni-versity(UIU), for his constant supervision, affectionate guidance and great encouragementand motivation. His keen interest on the topic and valuable advices throughout the studywas of great help in completing thesis.We are especially grateful to the Department of Computer Science and Engineering (CSE)of Military Institute of Science and Technology (MIST) for providing their all out supportduring the thesis work. Also, we want to express our deepest gratitude to the reviewers,Lecturer Jahidul Arafat and Lecturer Wali Mohammad Abdullah for their valuable inputswhich helped us in revising the initial draft and preparing the final paper.Finally, we would like to thank our families and our course mates for their appreciableassistance, patience and suggestions during the course of our thesis.DhakaDecember 2014Kowshik BhowmikAfsana Zarin ChowdhurySushmita MondalABSTRACTWe present a word based Bangla spelling checker which improves the quality ofsuggestions with the help of the previous and next words of the misspelled words ina document. A spell checker, as we know, is a tool used for checking the spellingerrors. Also, it corrects those errors in the text or a document. Development of anyapplication for Bangla language is relatively complicated due to the complexities ofthe Bangla character set. Bangla alphabet consists of nearly 160 complex shaped com-pound character classes in addition to 50 basic character classes. Obviously, developinga spell checker application for Bangla language raises many new difficulties which donot have to be dealt with in case of a Latin based texts such as English. Most of thecurrently available Bangla spell checkers are based on correcting errors that has beencommitted on a character level. 
The complex rules for Bangla spelling and the com-plexities of Bangla character set demand different error detection & correction methodsfrom those used for other languages. Additionally, the</s>
<s>presence of similarly shapedcharacters, compound characters and the inflectional nature of the laguage present asignificant challenge in producing suggestions for a misspelled word when employingthe traditional methods. Considering the intricacies of the problem we have proposed,in this paper, the development of a word based spell checker for Bangla language. Ourresearch is aimed at the correction of misspelled words by considering their neighbour-ing words. For this purpose we have built a lexicon of unique structure. Based on thisspecially built lexicon the proposed system attempts to predict the misspelled words inan input text file. This paper also shows the performance and evaluation of our pro-posed solution. Finally, we conclude by describing the limitations of the system withpossible future improvements.TABLE OF CONTENTCERTIFICATION iiCANDIDATES’ DECLARATION iiiACKNOWLEDGEMENT ivABSTRACT 1List of Figures 6List of Tables 7List of Abbreviation 81 INTRODUCTION 91.1 Overview of Bangla spelling Checker . . . . . . . . . . . . . . . . . . . . 91.2 What is Bangla Spelling Checker? . . . . . . . . . . . . . . . . . . . . . . 91.3 Stemmer use in Bangla Spelling Checker . . . . . . . . . . . . . . . . . . 101.4 How we use the Bangla Spelling Checker . . . . . . . . . . . . . . . . . . 112 LITERATURE REVIEW 132.1 Natural Language Processing . . . . . . . . . . . . . . . . . . . . . . . . . 132.1.1 Major Tasks in NLP . . . . . . . . . . . . . . . . . . . . . . . . . 132.1.2 Levels of Natural Language Processing . . . . . . . . . . . . . . . 142.2 Noisy Optical Character Recognition and Bangla Language . . . . . . . . . 172.3 Degraded Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172.3.1 Types of Spelling Error . . . . . . . . . . . . . . . . . . . . . . . . 182.4 Post-Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192.5 Font Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192.6 Lexicon Filter . . . . . . . . . . . . . . . . . . . . . .</s>
<s>. . . . . . . . . . . 212.6.1 Structure of a Lexicon . . . . . . . . . . . . . . . . . . . . . . . . 212.6.2 Two Tree Implementation of Bangla Lexicon . . . . . . . . . . . . 222.6.3 Generation of Suspicious Words . . . . . . . . . . . . . . . . . . . 242.6.4 Word Partial Format Derivation . . . . . . . . . . . . . . . . . . . 252.6.5 Approximate String Matching Methods . . . . . . . . . . . . . . . 262.7 Near-Neighbor Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 322.7.1 Rogets Thesaurus . . . . . . . . . . . . . . . . . . . . . . . . . . . 322.7.2 Thesaural Connections . . . . . . . . . . . . . . . . . . . . . . . . 332.7.3 Error Correction of Bangla OCR Using Morphological Parsing . . . 342.8 Grammar Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362.9 Pattern Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362.9.1 Bayes Decision Theory . . . . . . . . . . . . . . . . . . . . . . . . 372.10 Knowledge Driven Validation . . . . . . . . . . . . . . . . . . . . . . . . . 372.10.1 Boyer-Moore String Searching Algorithm . . . . . . . . . . . . . . 373 METHODOLOGY 393.1 Initial scenario of our project . . . . . . . . . . . . . . . . . . . . . . . . . 393.2 Lexicon Formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403.3 Training The System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413.4 Formation of Root Word Lexicon . . . . . . . . . . . . . . . . . . . . . . . 423.5</s>
<s>Formation of Previous/Post Word . . . . . . . . . . . . . . . . . . . . . . . 433.6 Text File Implementation of Lexicon . . . . . . . . . . . . . . . . . . . . . 433.7 Database Implementation of Lexicon . . . . . . . . . . . . . . . . . . . . . 443.8 Detection of Faulty Word . . . . . . . . . . . . . . . . . . . . . . . . . . . 453.9 Correction of Faulty Word . . . . . . . . . . . . . . . . . . . . . . . . . . 474 RESULTS AND ANALYSIS 494.1 Experiment the results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495 CONCLUSION 565.1 Limitation of the system . . . . . . . . . . . . . . . . . . . . . . . . . . . 565.2 Future expansion of the system . . . . . . . . . . . . . . . . . . . . . . . . 57References 57LIST OF FIGURES2.1 Block Diagram of Present System . . . . . . . . . . . . . . . . . . . . . . 202.2 Digital Search Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222.3 Tree Structure in English Lexicon . . . . . . . . . . . . . . . . . . . . . . 232.4 Lexicon structure for error correction . . . . . . . . . . . . . . . . . . . . . 242.5 Cross correlation equation-1 . . . . . . . . . . . . . . . . . . . . . . . . . 262.6 Cross correlation equation-2 . . . . . . . . . . . . . . . . . . . . . . . . . 262.7 Misunderstand OCR word . . . . . . . . . . . . . . . . . . . . . . . . . . 272.8 Probability of match method . . . . . . . . . . . . . . . . . . . . . . . . . 272.9 Recurrence relation . . . . . . . . . . . . . . . .</s>
<s>. . . . . . . . . . . . . . 282.10 Training N-gram Model(a) [1] . . . . . . . . . . . . . . . . . . . . . . . . 292.11 Training N-gram Model (b) [1] . . . . . . . . . . . . . . . . . . . . . . . . 302.12 Minimum Edit Distance Equation . . . . . . . . . . . . . . . . . . . . . . 312.13 Bayes Probability Matching Equation-1 . . . . . . . . . . . . . . . . . . . 312.14 Bayes Probability Matching Equation-2 . . . . . . . . . . . . . . . . . . . 312.15 Some confusing character pairs in Bangla and their percentage of misclassi-fication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353.1 Basic Characters of Bangla Alphabet (Vowels) . . . . . . . . . . . . . . . . 393.2 Basic Characters of Bangla Alphabet (Consonants) . . . . . . . . . . . . . 403.3 Examples of mModifiers in Bangla . . . . . . . . . . . . . . . . . . . . . . 403.4 Examples of Compound Characters in Bangla . . . . . . . . . . . . . . . . 413.5 An Example of Digital Image of Bangla Text . . . . . . . . . . . . . . . . 413.6 Selected Connected Components . . . . . . . . . . . . . . . . . . . . . . . 423.7 Example of Search Results . . . . . . . . . . . . . . . . . . . . . . . . . . 423.8 First Step of Our Project . . . . . . . . . . . . . . . . . . . . . . . . . . . 433.9 Structure of Lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433.10 Structure of Previous/Post Word . . . . . . . . . . . . . . . . . . . . . . . 433.11 Text File for Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . .</s>
<s>443.12 Processed Text File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443.13 Three Layer Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453.14 Database of Our Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463.15 MVC Architecture of Digital Lexicon . . . . . . . . . . . . . . . . . . . . 463.16 A Text File with Faulty Words . . . . . . . . . . . . . . . . . . . . . . . . 473.17 Results of Finding Faulty Words from The Text . . . . . . . . . . . . . . . 473.18 Corrected Words List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473.19 Framework of The Process [23] . . . . . . . . . . . . . . . . . . . . . . . . 484.1 Text Document about International Cricket . . . . . . . . . . . . . . . . . . 494.2 Text Document with Misspelled Words of Fiugre 4.1 . . . . . . . . . . . . 504.3 Corrected Text File by the System of Fiugre 4.2 . . . . . . . . . . . . . . . 504.4 Text Document about Politics . . . . . . . . . . . . . . . . . . . . . . . . . 514.5 Text Document with Misspelled Words of Fiugre 4.4 . . . . . . . . . . . . 514.6 Corrected Text File by The System of Fiugre 4.5 . . . . . . . . . . . . . . . 524.7 Text Document about International Affairs . . . . . . . . . . . . . . . . . . 524.8 Text Document with Misspelled Words of Fiugre 4.7 . . . . . . . . . . . . 534.9 Corrected Text File by The System of Fiugre 4.8 . . . . . . . . . . . . . . . 544.10 Text Document about International Sports (Football) . . . . . . . . . . . . 544.11 Text Document with Misspelled Words of Fiugre 4.10 . . . . . . . . . . . .</s>
<s>544.12 Corrected Text File by The System of Fiugre 4.11 . . . . . . . . . . . . . . 55LIST OF TABLES2.1 Rate of Error for Bangla . . . . . . . . . . . . . . . . . . . . . . . . . . . 192.2 Number of Characters Making Error . . . . . . . . . . . . . . . . . . . . . 192.3 Assumptions of Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . 254.1 Accuracy percentage of our system based on the cases . . . . . . . . . . . 53LIST OF ABBREVIATIONIR : Information RetievalIE : Information ExtractionNLG : Natural Language GeneratorNLU : Natural Language UnderstandingOCR : Optical Character RecognitionSR : Speech RecognitionCHAPTER 1INTRODUCTION1.1 Overview of Bangla spelling CheckerThere are more than 200 million native speakers of Bangla, the majority of whom live inBangladesh and in the Indian state of West Bengal. However, there has been very little re-search effort in the computerization of the Bangla language, leading to a dearth of Banglanatural language processing applications and tools. A Bangla spelling checker, one suchapplication, is an essential component of many of the common desktop applications such asword processors as well as the more exotic applications, such as a machine language transla-tor. One particular challenge facing the development of a usable spelling checker for Banglais the languages complex orthographic rules, in part a result of the large gap between thespelling and pronunciation of a word. One impact of this complexity can be seen in the ob-servation that two of the most common reasons for misspelling are (i) phonetic similarity ofBangla characters and (ii) the difference between grapheme representation and phonetic ut-terances. While there has been a sustained effort of late to develop a usable spelling checker,none of the solutions has been able to handle the full orthographic complexity of Bangla.We use the steps in the process of checking the spelling of a word when:(a) detect whether it is misspelled or not,(b) generate suggestions if it is misspelled, and(c) rank the suggestions so that the most likely candidate is placed first.1.2 What is Bangla Spelling Checker?It is desirable from a speller that it would search through a document for invalid or miss-spelled words. The searching area might be pre-selected by highlighting the portion of thedocument or the checking should run forward starting from the cursor position of the activedocument up to the end. Functionally, each word is identified on the run and the wordis matched with the database of the valid word-stock or word dictionary. If no match isfound the word is an invalid or miss-spelled one. In case of Bangla the development isrelatively tedious due to language complexities. Like other languages, Bangla is dividedinto vowels and consonants. Half-form of vowels exist when vowels</s>
Half-forms of vowels exist when vowels guide the sound of consonants. The place where the half-form resides is not fixed: some reside to the left of the biasing consonant (left-biased), and similarly there are right-biased, bottom-biased and even both-biased half-forms of vowels. Compound forms of consonants and half-forms of compound consonants, along with the plain consonants, exist in the character set at the application level. Though the Bangla alphabet is case-insensitive, Bangla fonts have about 190 different characters, of which 50 are in principal forms.
Human beings often produce sentences that are ill-formed at various levels, including the typographical/morphological, syntactic, semantic, and pragmatic levels. When we encounter ill-formed sentences we usually understand their meaning and tolerate the errors. The intended meaning can often be inferred using a variety of linguistic and real-world knowledge.
Shanon tested the interpretation of spoken ungrammatical sentences in Hebrew, with one to three errors in a sentence [2]. The errors included violations of agreement rules (e.g. number, gender, and tense). Shanon found that humans preferred particular types of correction: in a number-agreement violation, for example, the verb is replaced rather than the noun, and in a tense disagreement between a verb and an adverbial phrase, the verb is replaced rather than the adverbial phrase. He also found that unmarked forms, such as verbs in the infinitive, are replaced more often than inflected forms. Shanon also observed a least-effort principle: if there are two possible changes, for example a single change or a double change, the simpler change is preferred.
An experiment by Cooper and others explored humans' tolerance of ill-formed sentences [3]. Their results, based on spoken text that included missing words, showed that listeners could detect errors better when they paid attention to detecting errors than when they did not; listeners were highly accurate in reporting the presence of missing words (96% of 190 cases) when they were forewarned about the specific types of errors that might be encountered while listening. But the detection of missing words (34% of 80 cases) was quite poor when the words were easily predictable from context and listeners were not forewarned about the specific types of errors.
1.3 Stemmer Use in a Bangla Spelling Checker
Stemming is a process by which a word is split into its stem and affix. Terms with common stems tend to have similar meanings, which makes stemming an attractive option to increase the performance of spelling checkers and other information retrieval applications. Another advantage of stemming is that it can drastically reduce the dictionary size used in various NLP applications, especially for highly inflected languages. The design of stemmers is language specific and requires moderate to significant linguistic expertise in the language, as well as an understanding of the needs of a spelling checker for that language [4]. Consequently, a stemmer's performance and effectiveness in applications such as spelling checking vary across languages. A typical simple stemming algorithm removes suffixes using a list of frequent suffixes, while a more complex one uses morphological knowledge to derive a stem from the word. Various stemming algorithms have been evaluated in applications ranging from spelling checking to information retrieval, and the results show that stemming appears to be more effective in such applications for highly inflected languages.
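The following is a minimal sketch of such a suffix-stripping stemmer. It is not the stemmer used in any of the cited works; the suffix list is a small illustrative sample, and a practical Bangla stemmer would need a much larger, linguistically validated suffix inventory.

    # A toy longest-match suffix-stripping stemmer (illustrative only).
    # SUFFIXES is a tiny sample; a real Bangla stemmer needs a full suffix list.
    SUFFIXES = ["গুলো", "দের", "টি", "টা", "রা", "ের", "ে"]

    def stem(word, min_stem_len=2):
        # Try the longest matching suffix first.
        for suffix in sorted(SUFFIXES, key=len, reverse=True):
            if word.endswith(suffix) and len(word) - len(suffix) >= min_stem_len:
                return word[: -len(suffix)]
        return word  # no suffix matched; the word is its own stem

    print(stem("বইগুলো"))  # expected to print "বই" under these assumptions

The minimum stem length guard keeps the stripper from reducing a short word to nothing; tuning that threshold is itself a language-specific design choice.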
1.4 How We Use the Bangla Spelling Checker
We address the detection and correction of ill-formed sentences containing a single syntactic error introduced by replacement of a valid word with a known/unknown word, insertion of an extra known/unknown word, or deletion of a word. Many systems have focussed on the recovery of ill-formed texts at the typographical level (Damerau, 1964; Damerau and Mays, 1989), the morpho-syntactic level (Vosse, 1992), the syntactic level (Hayes and Mouradian, 1981; Mellish, 1989), the semantic level (Fass and Wilks, 1983), and the pragmatic level (Granger, 1983). Those systems described how to identify a localised error and how to repair it using grammar-independent rules (Mellish, 1989), grammar-specific rules or meta-rules (Weischedel and Sondheimer, 1983), semantic preferences (Fass and Wilks, 1983), and heuristic approaches (Damerau, 1964; Damerau and Mays, 1989; Vosse, 1992) [4].
Weischedel and Sondheimer used the term one-stage error recovery for a system that can process both ill-formed and well-formed sentences with a single parser [4]. Our system is a two-stage error recovery parser based on chart parsing. It consists of a well-formed sentence chart parser (WFSCP) and an ill-formed sentence chart parser (IFSCP) with a spelling correction algorithm based on dictionary lookup. The system invokes the IFSCP only if the WFSCP cannot recognise the input string as well-formed. When the IFSCP identifies a local error as the substitution of a word, the spelling correction algorithm is invoked and provided with the syntactic information inferred by the IFSCP to correct the word believed to be misspelt. This strategy has the advantage that the recovery process cannot affect the performance of the parser on well-formed sentences in terms of speed or complexity of the parsing strategy.
This is an advantage in terms of processing efficiency; in terms of human processing, it corresponds to proposing that human processing has efficiency as a goal, and that consequently special methods that are not normally used are brought to bear on ill-formed sentences. Weischedel and Sondheimer would appear to posit that error-repair methods are initiated exactly when the error is first detected, whereas our approach corresponds to saying that it is reasonable to defer correction until more is known. This could be the end of the current phrase, or the current sentence, or perhaps just until a couple of further words have been seen. In the WFSCP/IFSCP system, WFSCP processing of the current sentence is completed before error correction begins, but this is largely a matter of algorithmic convenience, and the system could be adapted to try correction after the end of the current noun phrase if an error was detected while a noun phrase was being processed (for example, because an unknown word was found).
The Bangla spelling checker also produces many alternative suggestions, which must be ranked. There are two ranking strategies: syntactic-level ranking and lexical-level ranking. The syntactic rank is a penalty score derived from the importance of the repaired constituent in the local tree (e.g. head constituents are more important than modifiers) and the type of error correction (e.g. substitution < addition = deletion). The lexical rank depends on the distance between two letters on a keyboard. For example, held would be considered
<s>a more plausible replacement than hold for the non-word hwld, because e is closer to w on a standard keyboard than is o. Among humans, thistype of correction strategy would of course only be available to those who were aware ofkeyboard layout. Vosse (1992) attempted to correct an ill-formed sentence at the morpho-syntactic level. If there was a misspelt word, then his spelling corrector suggested the bestcorrection. With this best correction his syntactic parser continued. Our system, on theother hand, employs a strictly top-down approach to a misspelt word. After WFSCP hasfinished trying to parse an ill-formed sentence containing a misspelt word, IFSCP is invokedand provided with the phrase structure, and misspelt word (if found by WFSCP). If IFSCPsuggests that the error type is substitution of an unknown word, or (more difficult) a knownword, then the lexical category inferred by IFSCP is used to assist in the spelling correction.CHAPTER 2LITERATURE REVIEW2.1 Natural Language ProcessingNatural Language Processing (NLP) is the computerized approach to analyzing text that isbased on both a set of theories and a set of technologies [5]. And, being a very active areaof research and development, there is not a single agreed-upon definition that would satisfyeveryone, but there are some aspects, which would be part of any knowledgeable person’sdefinition.2.1.1 Major Tasks in NLPThe following is a list of some of the most commonly researched tasks in NLP. Note thatsome of these tasks have direct real-world applications, while others more commonly serveas sub-tasks that are used to aid in solving larger tasks. What distinguishes these tasks fromother potential and actual NLP tasks is not only the volume of research devoted to thembut the fact that for each one there is typically a well-defined problem setting, a standardmetric for evaluating the task, standard corpora on which the task can be evaluated, andcompetitions devoted to the specific task.Information Retrieval is concerned with storing, searching and retrieving information. Itis a separate field within computer science (closer to databases), but IR relies on some NLPmethods (for example, stemming) [6] . Some current research and applications seek tobridge the gap between IR and NLP. Automated information retrieval systems are used toreduce what has been called information overload. Many universities and public librariesuse IR systems to provide access to books, journals and other documents.Information extraction (IE) is the task of automatically extracting structured informationfrom unstructured and/or semi-structured documents [7] . In most of the cases this activ-ity concerns processing human language texts by means of Natural Language Processing(NLP). Recent activities in multimedia document processing like automatic annotation andcontent extraction out of images / audio / video could be seen as information extraction.Natural Language Generation (NLG) is the natural language processing task of generat-ing natural language from a machine representation system such as a knowledge base or alogical form [8] . Psycholinguists prefer the term language production when such formalrepresentations are interpreted as models for mental representations. It could be said anNLG system is like a translator that converts a computer based representation into a naturallanguage representation.</s>
<s>However, the methods to produce the final language are differentfrom those of a compiler due to the inherent expressibility of natural languages.Natural Language Understanding converts chunks of text into more formal representa-tions such as first-order logic structures that are easier for computer programs to manipulate.Natural language understanding involves the identification of the intended semantic from themultiple possible semantics which can be derived from a natural language expression whichusually takes the form of organized notations of natural languages concepts. Introductionand creation of language met model and ontology are efficient however empirical solutions.An explicit formalization of natural languages semantics without confusions with implicitassumptions such as closed world assumption (CWA) vs. open world assumption, or subjec-tive Yes/No vs. objective True/False is expected for the construction of a basis of semanticsformalization.Speech recognition (SR) is the translation of spoken words into text. It is also known as’automatic speech recognition’ (ASR), ”computer speech recognition”, or just ’speech totext’ (STT). Some SR systems use ’speaker-independent speech recognition’ while othersuse ”training” where an individual speaker reads sections of text into the SR system. Thesesystems analyze the person’s specific voice and use it to fine-tune the recognition of thatperson’s speech, resulting in more accurate transcription. Systems that do not use trainingare called ’speaker-independent’ systems. Systems that use training are called ’speaker-dependent’ systems [9] .Optical Character Recognition (OCR) is generally an ’off-line’ process, which analyzesa static document. Handwriting movement analysis can be used as input to handwritingrecognition [10] . Instead of merely using the shapes of glyphs and words, this techniqueis able to capture motions, such as the order in which segments are drawn, the direction,and the pattern of putting the pen down and lifting it. This additional information can makethe end-to-end process more accurate. This technology is also known as ”on-line charac-ter recognition”, ”dynamic character recognition”, ”real-time character recognition”, and”intelligent character recognition”.2.1.2 Levels of Natural Language ProcessingThe most explanatory method for presenting what actually happens within a Natural Lan-guage Processing system is by means of the ’levels of language’ approach. This is alsoreferred to as the synchronous model of language and is distinguished from the earlier se-quential model, which hypothesizes that the levels of human language processing followone another in a strictly sequential manner. Psycholinguistic research suggests that languageprocessing is much more dynamic, as the levels can interact in a variety of orders. Intro-spection reveals that we frequently use information we gain from what is typically thoughtof as a higher level of processing to assist in a lower level of analysis. For example, thepragmatic knowledge that the document you are reading is about biology will be used whena particular word that has several possible senses (or meanings) is encountered, and the wordwill be interpreted as having the biology sense.Of necessity, the following description of levels will be presented sequentially. 
The key pointhere is that meaning is conveyed by each and every level of language and that since humanshave been shown to use all levels of language to gain understanding, the more capable anNLP system is, the more levels of</s>
<s>language it will utilize.Phonology This level deals with the interpretation of speech sounds within and acrosswords. There are, in fact, three types of rules used in phonological analysis: 1) phoneticrules for sounds within words; 2) phonemic rules for variations of pronunciation whenwords are spoken together, and; 3) prosodic rules for fluctuation in stress and intonationacross a sentence. In an NLP system that accepts spoken input, the sound waves are ana-lyzed and encoded into a digitized signal for interpretation by various rules or by comparisonto the particular language model being utilized.Morphology This level deals with the nature of components of words, which are composedof morphemes the smallest units of meaning. For example, the word preregistration can bemorphologically analyzed into three separate morphemes: the prefix pre, the root registra-tion, and the suffix. Since the meaning of each morpheme remains the same across words,humans can break down an unknown word into its constituent morphemes in order to un-derstand its meaning. Similarly, an NLP system can recognize the meaning conveyed byeach morpheme in order to gain and represent meaning. For example, adding the suffixedto a verb, conveys that the action of the verb took place in the past. This is a key piece ofmeaning, and in fact, is frequently only evidenced in a text by the use of the morpheme.Lexical At this level, humans, as well as NLP systems, interpret the meaning of individualwords. Several types of processing contribute to word-level understanding the first of thesebeing assignment of a single part-of-speech tag to each word. In this processing, words thatcan function as more than one part-of-speech are assigned the most probable parts of speechtag based on the context in which they occur. Additionally at the lexical level, those wordsthat have only one possible sense or meaning can be replaced by a semantic representationof that meaning. The nature of the representation varies according to the semantic theoryutilized in the NLP system .The lexical level may require a lexicon, and the particular approach taken by an NLP systemwill determine whether a lexicon will be utilized, as well as the nature and extent of infor-mation that is encoded in the lexicon. Lexicons may be quite simple, with only the wordsand their part(s)-of-speech, or may be increasingly complex and contain information on thesemantic class of the word, what arguments it takes, and the semantic limitations on thesearguments, definitions of the sense(s) in the semantic representation utilized in the particularsystem, and even the semantic field in which each sense of a polysemous word is used.Syntactic This level focuses on analyzing the words in a sentence so as to uncover the gram-matical structure of the sentence. This requires both a grammar and a parser. The outputof this level of processing is a (possibly de-linearized) representation of the sentence thatreveals the structural dependency relationships between the words. There are various gram-mars that can be utilized, and which will, in turn, impact the choice of a parser. Not all NLPapplications require a full parse of sentences, therefore the remaining challenges in parsingof</s>
<s>prepositional phrase attachment and conjunction scoping no longer stymie those applica-tions for which phrasal and clausal dependencies are sufficient. Syntax conveys meaning inmost languages because order and dependency contribute to meaning. For example the twosentences: ’The dog chased the cat.’ and ’The cat chased the dog.’ differ only in terms ofsyntax, yet convey quite different meanings.Semantics This is the level at which most people think meaning is determined, however, aswe can see in the above defining of the levels, it is all the levels that contribute to meaning.Semantic processing determines the possible meanings of a sentence by focusing on the in-teractions among word-level meanings in the sentence. This level of processing can includethe semantic disambiguation of words with multiple senses; in an analogous way to howsyntactic disambiguation of words that can function as multiple parts-of-speech is accom-plished at the syntactic level. Semantic disambiguation permits one and only one sense ofpolysemous words to be selected and included in the semantic representation of the sentence.For example, amongst other meanings, ’file’ as a noun can mean either a folder for storingpapers, or a tool to shape one’s fingernails, or a line of individuals in a queue. If informationfrom the rest of the sentence were required for the disambiguation, the semantic, not thelexical level, would do the disambiguation. A wide range of methods can be implementedto accomplish the disambiguation, some which require information as to the frequency withwhich each sense occurs in a particular corpus of interest, or in general usage, some whichrequire consideration of the local context, and others which utilize pragmatic knowledge ofthe domain of the document.Discourse While syntax and semantics work with sentence-length units, the discourse levelof NLP works with units of text longer than a sentence. That is, it does not interpretmulti sentence texts as just concatenated sentences, each of which can be interpreted singly.Rather, discourse focuses on the properties of the text as a whole that convey meaning bymaking connections between component sentences. Several types of discourse processingcan occur at this level, two of the most common being anaphora resolution and discourse/-text structure recognition. Discourse/text structure recognition determines the functions ofsentences in the text, which, in turn, adds to the meaningful representation of the text.Pragmatic This level is concerned with the purposeful use of language in situations andutilizes context over and above the contents of the text for understanding The goal is toexplain how extra meaning is read into texts without actually being encoded in them. Thisrequires much world knowledge, including the understanding of intentions, plans, and goals.Some NLP applications may utilize knowledge bases and inference modules.2.2 Noisy Optical Character Recognition and Bangla LanguageFor some decades there has been massive, expensive, ongoing institutional digitization oftextual resources such as books, magazines, newspaper articles, documents, pamphlets andephemera from cultural archives. In addition, declassified government documents are beingreleased into the public domain, and many organizations and individuals are converting ex-isting document images into machine readable text via OCR [11] . To make this digitizationaccurate and efficient, a great deal of research work</s>
has been done for languages based on the Latin script.
Although Bangla is one of the most widely spoken languages of the world (over 200 million people use Bangla as their medium of communication), research on the recognition of Bangla characters is scarce. In this context, a global effort has been taken to computerize the Bangla language. Compared to English and other language scripts, one of the major stumbling blocks in Optical Character Recognition (OCR) of Bangla script is the large number of complex-shaped character classes in the Bangla alphabet. In addition to 50 basic character classes, there are nearly 160 complex-shaped compound character classes. Dealing with such a large variety of characters with a suitably designed feature set is a challenging problem. Uncertainty and imprecision are inherent in handwritten script. Moreover, such a large variety of complex-shaped characters, some of which closely resemble one another, makes OCR of Bangla characters more difficult.
2.3 Degraded Text
The biggest problem associated with the retrieval of OCR text from scanned data is the unavoidable character corruption that results even from the best OCR systems. Very little research has dealt with this problem. In the case of good-quality input, OCR errors have little to no effect, but effectiveness is greatly reduced when the scanning quality is poor, and it is not always possible to supply good-quality scans as input to the OCR system [12]. As OCR often deals with the digitization of old documents and literature, the scan quality, font type and font size of the input will not always be compatible with a standard OCR system. As a result, recognition errors can occur during the preprocessing stages of the OCR system. Due to errors in the character recognition process, unpredictable distortion occurs, which is a major problem for retrieval over OCR output. Users have no idea of such distortion; as a consequence, their queries hardly match the terms actually stored in the OCR text. Thus the efficiency of retrieval is greatly reduced, and this is observed more often for low-quality inputs. A fault-tolerant retrieval strategy based on automatic keyword extraction and fuzzy matching has been proposed with a view to reducing the losses incurred when retrieving noisy OCR text.
2.3.1 Types of Spelling Error
Correction techniques are designed on the basis of spelling error trends, also called error patterns. Therefore many studies have analyzed the types and trends of spelling errors, the most notable being the studies performed by Damerau. According to these studies, spelling errors are generally divided into two types: typographic errors and cognitive errors [13].
A. Typographic errors. These errors occur when the correct spelling of the word is known but the word is mistyped by mistake. They are mostly related to the keyboard and therefore do not follow any linguistic criteria. A study by Damerau shows that 80% of typographic errors fall into one of the following four categories.
1. Single letter insertion, e.g. typing acress for cress.
2. Single letter deletion, e.g. typing acress for actress.
3. Single letter substitution, e.g. typing acress for across.
4. Transposition of two adjacent letters, e.g. typing acress for caress.
The errors produced by any one of the above editing operations are also called single-errors.
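As an illustration of these four operations, the sketch below generates every string that is one such edit away from a given word; a spelling checker can intersect this candidate set with its dictionary to propose corrections. This is only a schematic example over a generic alphabet, not the candidate generator of any system discussed in this thesis.

    # Generate all strings within one single-error edit (insertion, deletion,
    # substitution, transposition) of `word`, over the given alphabet.
    def single_edit_candidates(word, alphabet):
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = [l + r[1:] for l, r in splits if r]
        transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
        substitutes = [l + c + r[1:] for l, r in splits if r for c in alphabet]
        inserts = [l + c + r for l, r in splits for c in alphabet]
        return set(deletes + transposes + substitutes + inserts)

    # For Bangla, the alphabet would be the basic characters and modifiers.
    candidates = single_edit_candidates("acress", "abcdefghijklmnopqrstuvwxyz")
    print("cress" in candidates)    # True: delete 'a' from acress
    print("actress" in candidates)  # True: insert 't' into acress
    print("across" in candidates)   # True: substitute 'e' with 'o'
    print("caress" in candidates)   # True: transpose 'a' and 'c'

Note that all four dictionary words of the Damerau examples above are recovered by this one-edit neighbourhood, which is why the single-error assumption is so effective in practice.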
The rate of errors for Bangla is shown in Table 2.1 [14], and the number of characters involved in an error is shown in Table 2.2.
B. Cognitive errors. These errors occur when the correct spelling of the word is not known. In the case of cognitive errors, the pronunciation of the misspelled word is the same as or similar to the pronunciation of the intended correct word (e.g. receive -> recieve, abyss -> abiss).

Table 2.1: Rate of Error for Bangla

  Types of Error                 Percentage
  Substitution / Replacement     66.32
  Deletion Error                 21.88
  Insertion Error                 6.53
  Swap / Transposition Error      5.27

Table 2.2: Number of Characters Making Error

  Error Zone Length (in no. of characters)    % of words
  1                                           41.36
  2                                           32.94
  3                                           16.58
  4                                            7.1
  5                                            1.78
  6                                            0.24

2.4 Post-Processing
For a complex language like Bengali, even the most sophisticated pre-processing and character recognition algorithms combined are sometimes not enough to produce a satisfactory result. The use of compound words in the language poses a major problem for the recognition process. Also, most applications of Bengali OCR deal with the digitization of old documents whose font types and sizes vary drastically. These factors greatly affect the accuracy and the efficiency of Bengali OCR systems [15].
The principal concern of the current OCR system is to achieve maximum accuracy. To attain the desired level of accuracy, a post-processing recognition engine is added to the system. This engine consists of filters such as a font engine, a lexicon filter and a near-neighbor analysis filter. The post-processing layer is the integration of different analyzers, as shown in Figure 2.1. Combining the post-processing layer, in other words the font engine, lexicon filter and near-neighbor analyzer, with further analyzing filters such as a pattern matching filter, a grammar checker and a knowledge-driven analyzer forms an efficient algorithm that ensures much higher accuracy of the output.
2.5 Font Engine
After preprocessing the digitized image, the data is sent to the font engine filter for matching. The filter maps its input to its database. A probabilistic model is used for the comparison between the input data and the database of fonts. The engine matches each of the fonts derived from the digitized image against the font database contained in the engine. Then, using the probabilistic model, a threshold value is calculated [15]. The image that gives the highest threshold value is selected, and the selected value is mapped to the font; a brute-force algorithm is used to do the mapping. Finally, for future retrieval purposes, the successfully analyzed and mapped fonts are stored in the database.
Figure 2.1: Block Diagram of Present System
2.6 Lexicon Filter
The accuracy of the output of the OCR system can be increased by constraining the output using a lexicon. A lexicon, as we know, is a list of the words allowed in a particular document. In principle, every word of the Bengali language is allowed to occur in a document, but in a more practical OCR system designed for a specific field, the size of the lexicon naturally narrows down.
This method poses some problems if a word in the document is not in the lexicon of the system; in that case, periodic updating of the lexicon is required [15].
Practical documents are more likely to contain words or strings made up of characters rather than isolated characters [15]. The efficiency of lexicon-driven string recognition can be improved if we match string classes rather than matching strings one by one. In string-class matching, the strings with a low-scoring partial match are discarded. The search strategy is designed so that the lexicon is organized in the form of a tree or a graph. This ensures that common substrings and the string image are matched only once.
2.6.1 Structure of a Lexicon
Trees can be used as the implementation of a lexicon data type. Such a tree stores a large collection of words and makes entry and lookup fast. The structure, developed by Edward Fredkin in 1960, is called a trie, taken from the central letters of 'retrieval'. The trie-based lexicon structure makes it possible to determine whether a word is in the dictionary more quickly than a hash table can, and it offers natural support for confirming the presence of prefixes in a way that hash tables cannot. On one level, a trie is simply a tree in which each node branches in as many as 256 different directions, one for each position in the ASCII table.
Digital search trees store strings character by character. Figure 2.2 is a tree that represents a set of 12 words; each input word is shown beneath the node that represents it. (Two-letter words lead to prettier pictures; all the structures we will see can store variable-length words.) In a tree representing words of lowercase letters, each node has 26-way branching (though most branches are empty, and are not shown in Figure 2.2). Searches are very fast: a search for "is" starts at the root, takes the "i" branch, then the "s" branch, and ends at the desired node. At every node, we access an array element (one of 26), test for null, and take a branch.
Figure 2.2: Digital Search Tree
Unfortunately, search tries have exorbitant space requirements: nodes with 26-way branching typically occupy 104 bytes, and 256-way nodes consume a kilobyte. Eight nodes representing the 34,000-character Unicode Standard would together require more than a megabyte! [16]
When using a tree to back a lexicon, the words are implicitly represented by the tree itself, each word represented as a chain of links moving downward from the root [15]. The root of the tree corresponds to the empty string, and each successive level of the tree corresponds to the prefix of the entire word list formed by appending another letter to the string represented by its parent. The A link descending from the root leads to the sub-tree containing all the words beginning with A, the B link from that node leads to the sub-tree containing all the words beginning with AB, and so on. Each node stores a true flag whenever the substring that ends at that particular point is a legitimate word.
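The following is a minimal sketch of this idea. It uses a dictionary per node instead of a fixed 26-way or 256-way array, trading the constant-time array access described above for lower memory use; it is illustrative only and not the implementation used in the system under discussion.

    # A toy trie-backed lexicon: each node maps a character to a child node
    # and carries a flag marking the end of a legitimate word.
    class TrieNode:
        def __init__(self):
            self.children = {}
            self.is_word = False

    class Lexicon:
        def __init__(self):
            self.root = TrieNode()

        def add(self, word):
            node = self.root
            for ch in word:
                node = node.children.setdefault(ch, TrieNode())
            node.is_word = True

        def contains(self, word):
            node = self._walk(word)
            return node is not None and node.is_word

        def has_prefix(self, prefix):
            return self._walk(prefix) is not None

        def _walk(self, s):
            node = self.root
            for ch in s:
                node = node.children.get(ch)
                if node is None:
                    return None
            return node

    lex = Lexicon()
    for w in ["be", "bed", "cab", "cage", "caged"]:
        lex.add(w)
    print(lex.contains("cage"), lex.contains("ca"), lex.has_prefix("ca"))  # True False True

The prefix query is exactly the property that makes tries attractive for OCR post-processing: partially recognized words can be checked against the lexicon before they are complete.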
If we pretend that the English alphabet only has 7 letters (say A through G), and we further assume that the English language has only five words (be, bed, cab, cage, and caged), then the underlying tree structure of the English lexicon would look like Figure 2.3.
Figure 2.3: Tree Structure in English Lexicon
2.6.2 Two-Tree Implementation of a Bangla Lexicon
There are two lexicons in the system under discussion: one for the root words and the other for the suffixes.
The root word lexicon is organized in alphabetical order as follows. It contains all the root words as well as morphologically deformed variants of the root words, so that simple concatenation with suffixes produces valid words. Some of the morphologically deformed variants cannot be used in isolation as root words in the text. This information, along with parts of speech and other grammatical information, is tagged with each entry of the root lexicon in the form of a number. Root words of length up to 3 characters, as well as the distinct strings of root words of length greater than 3, are represented in a tree structure. At each leaf node of the trie, an address as well as a boolean flag is maintained. The address of a leaf node Lk (where Lk denotes the kth node of the Lth level) of the trie points to the first word of the main lexicon whose first three characters are the same as those encountered by traversing the tree from the root and continuing up to that leaf node Lk. Therefore, if a string is four or more characters long, we can obtain the address from the trie using the first three characters and then sequentially search the words in the main lexicon. The lexicon structure for error correction is shown in Figure 2.4. The boolean flag of a leaf node Lk indicates whether the string of characters obtained by traversing the tree, starting from the root node and continuing up to Lk, is a valid root word or not. Thus, by means of the boolean flags, we can check the validity of test strings whose length is less than or equal to 3 without searching the root word lexicon.
The suffix lexicon is maintained in another tree structure. Each node in the tree represents either a valid suffix or an intermediate character of a valid suffix. A node representing a valid suffix is associated with a marker needed for a grammatical agreement test [17]. If one traverses from the root of the suffix lexicon trie to a node having a valid suffix marker, the encountered character sequence represents a valid suffix string in reverse order (i.e., last character first). This is so because during suffix search we start from the last character of a string and move leftwards.
2.6.3 Generation of Suspicious Words
In the OCR validation module used by the System for Preservation of Electronic Resources (SPER), developed at the U.S. National Library of Medicine, the suspicious words in the OCR output of a scanned document are detected and corrected. To make corrections to the suspicious words, the module first derives the partial format of each suspicious word. Then the module retrieves candidate words by
<s>searching the lexicon using partial-match search.Then the suspicious words and the words derived by the search are compared taking intoconsideration the joint probabilities of N-gram and OCR edit.transformation correspondingto the candidates. The derivation of partial format which is based on the error analysis ofthe OCR engine effectively generates the desired candidate words from the lexicon. Thislexicon is represented by ternary search trees.Figure 2.4: Lexicon structure for error correctionTable 2.3: Assumptions of ConversionCharacter Conversions Examples1 ce* CHARGECHABGE2 ce1*e2 informationinforrnation3 ce1e2* AdulterationAclulteration4 ce1*e2* SulphateSiilphate5 c1c2e* libellantlibeUantThe FineReader OCR Engine used by SPER assigns a confidence value to each recognizedcharacter by a binary feature isSuspicious. In addition, OCR words that are not found inthe built-in dictionaries may also be indicated by a binary feature isFromDict. So using thisOCR engine we have two types of words that are considered to be suspicious. The first kindis the words that contains one or several characters whose confidence value is below thethreshold value allowed by that engine [18] . The second type of suspicious words includethe words that are not found in context-based lexicon of the scanned documents.2.6.4 Word Partial Format DerivationA partial word is a word that contains one or several wild cards . where a wild card signifies acharacter which is unrecognized. Word partial format derivation is the process of estimatingthe partial format of a word that is produced as an output of the OCR engine by combiningthe position of the characters with low confidence value within the word. Then we use thisword partial format to generate candidate words from the lexicon by approximate stringmatching.Errors in a word produced by the OCR engine can be considered as transformation errorsof one or more characters. In this particular application three kinds of transformation errorsare taken into account; shown in Table 2.3. Firstly, there is the error where a character istransformed into another character. Secondly, error occurs then a single character is trans-formed into two characters. Finally, there is the case where two characters are transformedinto one character. Considering the different positions of a character that has gone througha transformation error, or in other word the character that has lower confidence value thanthe defined threshold, we have five kinds of transformation errors. These five errors are sig-nified in Table 2.3 where ’c’ denotes a character which is correct and ’e’ denotes the errorcharacter. Low confidence characters are denoted by ’*’ .The process of deriving partial format is aimed at estimating the partial formats by sequen-tially replacing characters with low confidence value and their neighbors with proper numberof wild cards. Different replacements results in different transformation rules. So a suspi-cious word with t low confidence characters could generate up to 4t partial formats. Themaximum value occurs when every one of the low confidence characters appear inside theword. Also it has to be taken into consideration that none of the low confidence characterinside the word is either next to another low confidence character or share the same neighborwith another low confidence character.2.6.5 Approximate String Matching MethodsCross correlation matching method The</s>
<s>cross-correlation function is commonly used inimage and signal processing where an unknown signal is searched for a known featureor shape, and is sometimes described as a sliding dot-product. In this project, the cross-correlation function was modified to operate on words and letters instead of quantitativesignals. The function compares words a (of length m) and b (of length n ¿= m) and producesa vector W of length l = m + n - 1 with elements Wi (where i = 0 ... l - 1) defined by theequation below:Figure 2.5: Cross correlation equation-1whereFigure 2.6: Cross correlation equation-2This function compares the two words by aligning them against each other and examiningthe corresponding pairs of letters. The element W0 is defined as the alignment positionwhere the last letter of word b aligns with the first letter of word a. Example calculationsusing the true word Biology and the mistranslated OCR word 1Biology are shown in Figure2.7 below:Figure 2.7: Misunderstand OCR wordApproximate String Matching by Ternary Search TreeThe Context based lexicon that is used by the OCR engines is represented by Ternary searchtrees [18] . The ternary search trees are more efficient than hashing and other search methodsas they offer efficient algorithms for sorting and searching strings.Each node of the TernarySearch Tree contains a split character and three pointers to its low, high and equal child nodes(based on the split character). During the sorting or searching of a word, the characters in thequery word are compared one by one with the split characters of nodes in the search path.This helps in deciding whether the next direction of search among the possible three. Thishelps find the optimum search path. New nodes are entered at the finishing of the search pathwhen sorting operations are carried out. In the case of searching: a boolean value true willbe returned when the search process ends in comparing the last character of the word andthe split character of a leaf node, and finds them be equal, otherwise false will be returned.Adjusting the splitting rules can help achieve more sophisticated methods of searching suchas partial-match search.Figure 2.8: Probability of match methodFigure 2.8 something is a balanced ternary search tree for the same set of 12 words. The lowand high pointers are shown as solid lines, while equal pointers are shown as dashed lines.Each input word is shown beneath its terminal node [16] . A search for the word ”is” startsat the root, proceeds down the equal child to the node with value ”s,” and stops there aftertwo comparisons. A search for ”ax” makes three comparisons to the first letter (”a”) andtwo comparisons to the second letter (”x”) before reporting that the word is not in the tree.Probability of match method The probability of match method is the edit distance methodwith the addition of the probabilistic substitution matrix. The edit distance algorithm wasimplemented slightly differently with this method in order to directly calculate the probabil-ity that a true word a would be mistranslated into the OCR word b. In order to calculate thisresult,</s>
the probability of the OCR making each edit was looked up in the substitution matrix, and all of the letter-wise probabilities were multiplied together to form the word translation probability. This implementation also required modifying the algorithm to find the maximum probability of translation rather than the minimum number of edits. The resulting recurrence relation is shown in Figure 2.9, where PI(x) and PD(x) are the frequencies of insertion and deletion of the character x, and PM(x, y) is the frequency of substituting x for y.
Figure 2.9: Recurrence relation
Similarity keys. A key is assigned to each dictionary word, and only the dictionary keys are compared with the key computed for the non-word [13]. The words whose keys are most similar to that key are selected as suggestions. Such an approach is fast, as only the words with similar keys have to be processed, and with a good transformation algorithm this method can also handle keyboard errors.
N-gram model formulation. In this formulation technique, N-grams are n-letter subsequences of words or strings, where n is usually one, two or three. One-letter n-grams are referred to as unigrams or monograms, two-letter n-grams as bigrams and three-letter n-grams as trigrams. In general, n-gram detection techniques work by examining each n-gram in an input string and looking it up in a pre-compiled table of n-gram statistics to ascertain either its existence or its frequency; words or strings found to contain nonexistent or highly infrequent n-grams are identified as probable misspellings. Letter n-grams, including trigrams, bigrams and unigrams, have been used in a variety of ways in text recognition and spelling correction techniques. They have been used by OCR correctors to capture the lexical syntax of a dictionary and to suggest legal corrections [19].
Figure 2.10: Training N-gram Model (a) [1]
Figure 2.11: Training N-gram Model (b) [1]
Edit distance matching method. The edit distance algorithm is a well-known recursive algorithm which supplies the optimum alignment between two ordered strings and determines the global minimum number of edits needed to transform one string into the other. In this algorithm the edits can be insertions, deletions, or substitutions. The algorithm is often implemented using a dynamic programming approach that calculates the number of edits D between every possible left-sided substring of each of the two words a and b. D(ai, bj), for example, is the edit distance between the first i letters of the word a and the first j letters of the word b. The dynamic programming calculation is recursive (Figure 2.12: Minimum Edit Distance Equation):

  D(ai, bj) = min[ D(ai-1, bj) + CD(ai), D(ai, bj-1) + CI(bj), D(ai-1, bj-1) + CM(ai, bj) ]

where CI, CM, and CD are the costs of insertion, match/substitution, and deletion respectively. In the simplest case, the costs of insertion, deletion, and substitution are all unit costs, and the cost of a match is zero; that is, CI(x) = CD(x) = 1 for all x, and CM(x, y) = 0 if x = y and 1 otherwise. If the two words a and b have lengths m and n, then D(am, bn) will contain the minimum edit distance needed to transform a into b. In our use of this function, normalizing by word length was desirable, so the final matching score was determined by D/m, giving the edit distance as a percentage of the true word length.
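The sketch below implements the unit-cost version of this recurrence and the D/m normalization described above. It is a generic dynamic programming implementation given for illustration, not the exact code of the study being summarized.

    # Edit distance with unit costs (insertion, deletion, substitution),
    # normalized by the length of the true word `a`.
    def edit_distance(a, b):
        m, n = len(a), len(b)
        # D[i][j] = edit distance between a[:i] and b[:j]
        D = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            D[i][0] = i          # delete the remaining characters of a
        for j in range(n + 1):
            D[0][j] = j          # insert the remaining characters of b
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                match = 0 if a[i - 1] == b[j - 1] else 1
                D[i][j] = min(D[i - 1][j] + 1,          # deletion
                              D[i][j - 1] + 1,          # insertion
                              D[i - 1][j - 1] + match)  # match/substitution
        return D[m][n]

    def normalized_score(true_word, ocr_word):
        return edit_distance(true_word, ocr_word) / len(true_word)

    print(edit_distance("held", "hwld"))            # 1 (single substitution)
    print(normalized_score("Biology", "1Biology"))  # 1/7, one inserted character

Replacing the unit costs with frequency-derived costs, as described next, turns the same table computation into the substitution-matrix variant.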
The method can be extended to use different costs for different edits, so that CM(x, y) may depend on the frequency of the OCR substituting x for y, CI(x) on the frequency of inserting character x, and CD(x) on the frequency of deleting character x, in a similar way to that described for the cross-correlation method above. This is commonly implemented using a substitution matrix as described above. For this study, the edit distance algorithm was implemented first without and then with a substitution matrix. The implementation with a substitution matrix is described in the next section.
Bayesian probability matching method. The Bayesian probability matching method further extends the edit distance algorithm, and considers the probability of a match with the dictionary word (using the above probability function), the frequency of occurrence of the dictionary word, and the probabilities and frequencies of all the other dictionary words when calculating the ranking score. The basic Bayes equation (Figure 2.13) is

  P(t|o) = P(o|t) P(t) / P(o)

where P(t|o) is the probability that the true word is t given that the OCR word is o, P(o|t) is the probability that the OCR will output the word o when presented with the dictionary word t (this is the probability-of-match function above), P(t) is the frequency of the word t in the dictionary, and P(o) is the frequency of the word o in the OCR output space.
It may at first seem problematic to find P(o), which would need a huge database of OCR output to directly count the frequencies of each unique mistranslation. Fortunately, the quantity can be calculated by summing the probability of producing o from each true word in the dictionary, weighted by the probability of that true word occurring in the source text. In mathematical terms (Figure 2.14),

  P(o) = sum over i of P(o|ti) P(ti)

where the ti are the words in the dictionary and the sum is over all entries. It turns out that this is not even computationally expensive: P(o|ti) P(ti) is already being calculated for every [o, ti] pair, since it is the numerator in the above equation, and it is only a matter of keeping a running sum as the function loops through all ti in the dictionary.
Rule-based techniques. Rule-based methods work by having a set of rules that capture common spelling and typographic errors and applying these rules to the misspelled word [13]. Intuitively, these rules are the inverses of common errors. Each correct word generated by this process is taken as a correction suggestion. The rules also have probabilities, making it possible to rank the suggestions by accumulating the probabilities of the applied rules. Edit distance can be viewed as a special case of a rule-based method with limitations on the possible rules.
2.7 Near-Neighbor Analysis
After being processed by the lexicon filter, the data is passed through a filter called the near-neighbor analyzer, which performs spell-checking [15]. After scanning the text and extracting the words of the document, the words are compared with a list containing correctly spelled words, a dictionary. In concept, this method sounds very similar to that of
<s>lexicon-filtering. In actuality, it is based on language specific algorithm that handlesmorphology. In a highly inflated language like Bengali, spell-checker must consider a wordin its various forms. For example, a word can be in its singular or plural form or in differentverbal forms. These different forms are originated from that specific word and clearly theseforms are inter-related. Morphology is the branch of linguistics that deals with this relationand can play quite an important role in the OCR system as different forms of the same wordcan occur in a document. To identify the relationship of a word in its original form and itsdifferent forms can help us a long way in developing efficient processing algorithms. So thenear-neighbor analyzer is an integral part of the post-processing engine of the OCR system.2.7.1 Rogets ThesaurusThe third edition electronic version of Rogets Thesaurus is composed of 990 sequentiallynumbered and named categories. There is a hierarchical structure both above and below thiscategory level. There are two structure levels above the category level and under each of the990 categories there are groups of words that are associated with the category heading given.The words under the categories are grouped under five possible grammatical classifications:noun, verb, adjective, adverb and preposition. These classifications are further subdividedinto more closely related groups of words. Some groups of words have cross-referencesassociated with them that point to other closely related groups of words. Figure 1 gives anexample of an extract within category 373 and the grammatical classification of noun, in thethesaurus. The cross-references are given by a numerical reference to the category numberfollowed by the title given in brackets.The thesaurus contains a collection of words that are grouped by their relation in meaning.Those words grouped together have a semantic relationship with each other and this infor-mation could be used to identify semantic relations between words. For example, a semanticrelationship between two words could be assumed if they occurred within the same categoryin the thesaurus.2.7.2 Thesaural ConnectionsThe application of the thesaurus for the identification of semantic relations between wordsrequired a means of determining what constitutes a valid semantic connection in the the-saurus between two words. For example, given words w1 and w2. Now the question is howcould the lexical organization of the thesaurus be exploited to establish whether a semanticrelation w1,w2 exists between themor not. Morris and Hirst identified five types of thesauralrelations between words based on the index entries of Rogets Thesaurus. For this approachfour types of possible connections between words in the thesaurus were identified for therepresentation of semantic relations between words by considering the actual thesaural en-tries. This ensured the inclusion of all words located in the thesaurus, for example, thosewords that form part of a multi-word thesaurus entry may not be represented in an indexentry. The connections that have been identified are considered between pairs of words andare outlined as follows:(1) Same category connection is defined as a pair of words both occurring under the samecategory.The words would be considered to be semantically related because they were foundwithin the same category,</s>
<s>where a category contains a group of associated words. This con-nection represents the strongest connection type of the four presented because the occur-rence of words within the same category indicates they are highly related and therefore havebeen grouped within the same area of the thesaurus.(2) Category to cross-reference connection occurs when a word has an associated crossreference that points to the category number of another word. Cross-references occur atthe end of semi-colon groups and point to other categories that closely relate to the currentgroup of words. Therefore, the words contained under the group of words a cross-referenceis pointing to are related to the current group of words that cross-reference is associatedwith.(3) Cross-reference to category connection can be described as the inverse of the previousconnection type given in (2). The cross-references associated with a word could be matchedwith the categories another word occurs under.(4) Same cross-reference connection is defined as the cross-references of two words point-ing to the same category number. The association of a cross-reference with a group of wordsindicates that the category the crossreference is pointing to contains words that are relatedto the current group [20] . Therefore, if two groups of words both have the same cross-references associated with them this implies that the words within these two groups couldalso be related.2.7.3 Error Correction of Bangla OCR Using Morphological ParsingBecause of the inflectional structure of surface words in the Bangla language, and becausethe errors in our OCR output are mostly single characters per word, the effort is limited tocorrecting a single character in a word. Single characters also include compound charactersthat are grapheme combinations of two or more basic characters.Let C be a character in the alphabet. A test is conducted a priori on a large set of data forperformance evaluation of the OCR, and we let D(C) denote the subset of characters whichmay be wrongly recognized as C. Members of D(C) may be called confusing characters(similarly shaped characters) for C. See Figure , where misclassification percentages ofcharacters with respect to their confusing characters are shown. Now, if a recognized stringW contains a character C, and if it is a mis-recognized character, then the correct word for Wis one of the strings obtained when C is replaced by elements of D(C). Strings obtained byreplacing the character C with the elements of D(C) may be called alternative or candidatestrings. We can attach a probability to each candidate string based on the a priori knowledgeof frequency of a confusing character.As it is not known which character in W is mis-recognized, candidate strings are generatedby replacing each character in W with its confusing characters. In generating the candidates,special care should be taken about run on, split and deletion errors also. Using this tech-nique, a complete set of candidates is generated. The candidate set thus formed is matchedin decreasing order of probability against either the root word lexicon or suffix lexicon. Thegrammatical agreement of the candidate words is also tested for validity. The valid wordsare called suggested words. Among these suggested words,</s>
<s>the first one is accepted by thesystem while other suggested words can be used by the user. If there is no match, the teststring is rejected.Figure 2.15: Some confusing character pairs in Bangla and their percentage of misclassifi-cation2.8 Grammar CheckGreater accuracy can be acquired if grammar of the language being scanned is taken intoconsideration. It helps determine whether the word is more likely to be a noun or a verb.A grammar checker is implemented either as a program or an integral part of a programand it is used to check the grammatical soundness of the written text in the context of thatparticular language. It is generally observed that a grammar checkers are implemented as apart of a larger program. But stand alone programs are also available.Natural processing of a language is used for the implementation of a grammar checker [15]. Collection of idiosyncratic errors is a formation of a large part grammatical definitions.This basically depends on the language of that specific system. And also of the knowledgeof that user on that particular language. The usefulness of this system can be fully realizedafter it is integrated to a word processing software. A grammar checker checks the syntacticsoundness of a sentence or some portion of it and notifies if any corrections are required tobe made. Preferably the system will also provide some linguistic explanation about whatactually is wrong with that sentence or that particular portion of it. Neither excessive noisenor silence is appreciated because we wouldn’t want any false alarms or an error goingunnoticed. After the check for grammatical correctness is done, the accuracy of the formedsentences increases.2.9 Pattern MatchingChecking an obtained sequence of tokens to check if the constituents of patterns are presentor not is known as pattern matching [15] . Unlike pattern matching where the patterns of aletter gets checked, here the sequence of tokens gets tested and unlike pattern recognitionthe match generally has to be near exact. Patterns generally are stored in a sequence or in theform of a tree. Notable applications of pattern matching include finding out the position of apattern in a given sequence of tokens. Also the system is used to output specific componentsof a sequence of tokens or replacing it with another token sequence. Obtaining labels or classcodes of a character pattern is the final aim of this system. The main task of recognition istaking a word or character pattern and mapping them to class sets that are predefined fromearlier tests and the training period.2.9.1 Bayes Decision TheoryThis refers to a decision theory that is informed beforehand by Bayesian Probability [21].This statistical system attempts at quantifying the chances among several possible decisions.It makes use of the costs and probabilities that it has acquired during the training steps.Bayesian decision theory implies that the distribution of any kind of probabilities representsa previous distribution.Assume that there are d feature measurements x1, . . . , xd which is extracted from the inputpattern. Then the pattern is represented by a d dimensional feature vector x = [x1, . .</s>
<s>. ,xd]T . We consider that x belongs to to one of the M predefined classes 1, . . . , M. Giventhe a priori probabilities P(i) and class conditional probability distributions p(x—i), i = 1, .. . , M, the a posteriori probabilities are computed by the Bayes formula [15] .2.10 Knowledge Driven ValidationA knowledge driven analyzer is used to carry out this particular validation process. Forthe optimization of the analyses text in a specific context a brute force searching is carriedout aided by an efficient search engine. The occurrence of certain string patterns withinga larger text is found out using the string searching algorithms. This algorithms, which arealso known as string matching algorithms are an important class of string algorithms [15] .Strings that are encoded can considerably effect the speed and efficiency of the algorithms.The use of a variable width encoding reduces slows the algorithms down to a great extent.It is much easier if we use a set of strings and then pass the complete set through the al-gorithm. The implementation of text as a set of strings makes the job easier for the searchengines. This consequently affects the search percentage which results in greater percentageof accuracy as well as efficiency of the OCR engine.2.10.1 Boyer-Moore String Searching AlgorithmThe Boyer-Moore algorithm for searching strings is considered to be the the most efficientamong the practical string search literature. The string that is being searched for the patternis preprocessed by this algorithm. But it leaves out the strings that are searched in the text.As a result it produces the optimum results where the text is considerably longer than thepattern or in cases where the pattern does not change even across numerous searches. Thisparticular algorithm uses the information that is achieved during the pre-processing steps sothat it can skip portions of the text. This results in lower constant factors when comparedto the other string algorithms making this a benchmark among its peers. The length of thepattern plays a great role in determining the speed of this algorithm as the speed increasesalong with the length of the pattern. The checking of context of the strings that have beenrecognized lends certainty about the context that the engine recognized.CHAPTER 3METHODOLOGY3.1 Initial scenario of our projectFrom the side of time, Bangla is a recent past when the research for this language hasbeen started. In this part of the world, Natural Language engineering is a much more newstory. So the advancement in computer technology took longer to have noticeable effect inthis region than the western world. At first it was main concern to recognize the Banglacharacters optically, the fifth-most popular language in the world. But at the time of doingthat some error occurred. For this automatic error detection and correction have becomea great challenge for present days. Two distinct error categories are categorized for worderrors, namely, non-word errors and real word errors. Non-words mean invalid words andreal word error means a valid word but not the perfect one for the sentences by making thesentence syntactically or</s>
|
<s>A non-word error is an invalid word, while a real-word error is a valid word that is not the right one for the sentence, making the sentence syntactically or semantically ill-formed or incorrect. In both cases, the main target is to detect the word error and either offer suggestions through a drop-down or automatically replace it with an appropriate valid word. In our research we assume that all sentences are well formed, so we only consider non-word errors. As already mentioned, we faced some problems in doing OCR; one reason is that the complex character grapheme structure of the Bangla script creates difficulty in error detection and correction. The Bangla alphabet has 11 vowels and 39 consonants, which are known as basic characters. The basic characters are shown in Figure 3.1 and Figure 3.2.
Figure 3.1: Basic Characters of Bangla Alphabet (Vowels)
Figure 3.2: Basic Characters of Bangla Alphabet (Consonants)
The vowels, depending on their position in a word, take different shapes, called modifiers (Figure 3.3). Compound characters are formed by two or more basic characters (Figure 3.4).
Figure 3.3: Examples of Modifiers in Bangla
Normally an error can be detected in two ways: 1) n-gram verification and 2) dictionary look-up [12]. In dictionary look-up the main concern is to identify each letter, but the main problem is to identify the compound characters in the text; other characters can be identified more easily than the compound characters. In an OCR system the text is scanned first and then the characters are recognized through some predetermined processes. While scanning the written text, if any character cannot be recognized properly, it is replaced by a group of letters (a cluster). A Bangla text image of this type is first converted into grayscale format and then into a set of connected components (Figure 3.5, Figure 3.6).
3.2 Lexicon Formation
Since this type of problem occurred, some new methods or processes were needed to solve it. As an initial step, words are searched from a text file or word dictionary according to the given letters.
Figure 3.4: Examples of Compound Characters in Bangla
Figure 3.5: An Example of Digital Image of Bangla Text
The word is searched according to the given letters of the word; some letters may be missing in the middle, front or back of the word, but the word is still found by matching the string against the given word file, and the required word can then be picked from a drop-down option. For example, if we want to search for a word by its letters, we type the letters and the expected word appears in the drop-down option, provided the word is stored in the text file. Figure 3.7 shows an example of this.
3.3 Training The System
For predicting words we first had to separate all the words from the text. To do this separation correctly, we record each new word with frequency 1; if we encounter a word we have seen earlier, we simply increase its frequency, and no new entry is created.
Figure 3.6: Selected Connected Components
Figure 3.7: Example of Search Results
3.4 Formation of Root Word Lexicon
For predicting the unknown or erroneous word, a new method is applied in our project.</s>
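A minimal sketch of the drop-down lookup described in Section 3.2, assuming the dictionary is a plain UTF-8 text file with one word per line; the file name and the ranking of suggestions are illustrative assumptions, not the thesis implementation.

```python
def load_wordlist(path):
    """Read one Bangla word per line from a UTF-8 text file."""
    with open(path, encoding="utf-8") as f:
        return [w.strip() for w in f if w.strip()]

def is_subsequence(letters, word):
    """True if the given letters appear in 'word' in order, possibly with
    other letters in between (so letters may be 'missing' from the query)."""
    it = iter(word)
    return all(ch in it for ch in letters)

def suggest(letters, wordlist, limit=10):
    """Return drop-down style suggestions: exact substring matches first,
    then words that merely contain the typed letters in order."""
    exact = [w for w in wordlist if letters in w]
    exact_set = set(exact)
    loose = [w for w in wordlist
             if w not in exact_set and is_subsequence(letters, w)]
    return (exact + loose)[:limit]

# words = load_wordlist("bangla_words.txt")   # hypothetical dictionary file
# print(suggest("গাছ", words))
```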
|
<s>Besides the separated current word, we also store the previous and post words of the current word along with their frequencies. This helps to determine which word is most likely to occur next to the current word. The main concept is to find the correct word to replace the erroneous word with the help of the post and previous words of the current word. The details of the whole process are as follows. While finding the previous and post words we use a list named Lexicon. The lexicon is constructed with a list of previous words, a list of post words, an integer recording the frequency of the current word, and a string named Current where the current word is stored. The structure of the lexicon is shown in Figure 3.9. Here we first check whether the current word is already listed or not. If not, it is listed and its frequency is set; then the previous word is added to the previous-word list and the post word to the post-word list, with their frequencies for that current word. If the current word is already stored, no new word is added to the lexicon; in that case a search takes place to check whether the previous/post word has already been stored for that current word. If not, it is stored in the previous/post list; otherwise the frequency of the corresponding previous or post word is increased.
Figure 3.8: First Step of Our Project
Figure 3.9: Structure of Lexicon
3.5 Formation of Previous/Post Word
The structure of the previous and post word lists is quite similar to the lexicon. In the previous/post word list the components are the string of the word and the frequency of the word. Similar to the lexicon, if we get a new previous or post word for the current word we insert a new entry; otherwise the frequency of the word is simply increased. The structure is shown in Figure 3.10.
Figure 3.10: Structure of Previous/Post Word
3.6 Text File Implementation of Lexicon
By doing so, a text file can be divided into current words and their previous and post word lists with their frequencies. In Figure 3.11 a text is written in Bangla Unicode, and in Figure 3.12 this text is separated into current words, previous words and post words.
Figure 3.11: Text File for Processing
Figure 3.12: Processed Text File
3.7 Database Implementation of Lexicon
An important aspect of the usability of a lexicon is its (technical) appearance. Many small, experimental systems have been built on stand-alone personal computers, using tools that were quite appropriate for the scientific project targets but not suitable for supporting practical applications in a production environment. The lexicon design described here is aimed at large-scale collaborative production, providing multi-user access and high performance on an appropriate platform. It can be integrated into external applications when used as a network server and, just as with traditional database systems, will be shielded from casual or non-technical / non-linguist users with appropriate front ends [22].</s>
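The lexicon of Sections 3.4 and 3.5 maps naturally onto nested frequency counters. The following Python sketch shows one plausible in-memory version, assuming the text has already been tokenised into sentences; the function and field names are invented for illustration.

```python
from collections import Counter, defaultdict

def build_lexicon(sentences):
    """Build the lexicon described above: for every current word keep its
    frequency plus frequency lists of the words seen immediately before
    (previous) and after (post) it.  'sentences' is an iterable of
    already-tokenised Bangla sentences (lists of words)."""
    lexicon = defaultdict(lambda: {"freq": 0,
                                   "prev": Counter(),
                                   "post": Counter()})
    for tokens in sentences:
        for i, word in enumerate(tokens):
            entry = lexicon[word]
            entry["freq"] += 1                      # current word seen once more
            if i > 0:
                entry["prev"][tokens[i - 1]] += 1   # word before the current word
            if i + 1 < len(tokens):
                entry["post"][tokens[i + 1]] += 1   # word after the current word
    return lexicon

# lex = build_lexicon([["আমি", "ভাত", "খাই"], ["আমি", "বই", "পড়ি"]])
# lex["আমি"]["post"]   # -> Counter({'ভাত': 1, 'বই': 1})
```

The same three pieces of information (current word, previous-word list, post-word list) are what the word, prevword and postword tables of the database implementation hold.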
|
<s>This design can be applied to a specific type of file, that is, a fixed-context database or text file, since the Bangla language is a vast area for predicting misspelled words. Bangla lexicon development can be described as the continuous interaction of three layers of functions, depicted below.
Figure 3.13: Three Layer Function
The Model View Controller (MVC) architecture is the main structure in the lexicon framework development. MVC has three parts: the model, the view and the controller, which can be thought of as the processing, the output and the input respectively: Input --> Processing --> Output corresponds to Controller --> Model --> View. The user input, the modeling of the external world, and the visual feedback to the user are separated and handled by these three components. The controller interprets mouse and keyboard inputs from the user and maps these user actions into commands that are sent to the model. The model manages one or more data elements, responds to queries about its state, and responds to instructions to change state [22]. The model in the lexicon development project consists of the MySQL database containing the lexicon data and the C# code that processes user input and formats output. The lexicon development interface has a web-based view and a standalone view, and both views are used for entering data into the lexicon. The web-based interface can be used by anyone, so the data entered through the web interface is not entered directly into the final lexicon file. Instead it is stored in a primary MySQL database for further validation for correctness and redundancy by specialists. After correction, the relevant data is stored in the final MySQL lexicon file and hence the lexicon is updated. The intermediate web technology controls the interaction of the model and view components. The standalone interface is a form-based interface that allows a user to enter the necessary data directly into a MySQL file. In our database there are three tables, namely word, prevword and postword. The word table has the structure of the lexicon shown earlier, and the other two follow the structure of the previous/post word lists (Figure 3.9, Figure 3.10). The tables of our database are shown in Figure 3.14.
Figure 3.14: Database of Our Project
Figure 3.15: MVC Architecture of Digital Lexicon
3.8 Detection of Faulty Word
In the presentation layer we supply the input containing a faulty word. First the faulty word is marked, and then we search for the previous and post words of the faulty word. According to frequency and comparability, the most used and most probable word is found. If there is a tie, or it is too complex to decide on the most used word, we also consider the word before the previous word of the faulty word, and similarly for the post words. In Figure 3.16 a text file with faulty words is shown, and in Figure 3.17 the result of finding the faulty words in our project is shown. In this case the faulty letter of the misspelled word was indicated by *. But if the misspelled word is not marked, we identify it by searching for each word of the document in our database: we match the letters of each document word with the words of the database, and according to the matching level of the two strings we give every word a confidence percentage.</s>
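A rough sketch of the detection step in Section 3.8, under the assumption that the "matching level of two strings" is measured with a generic similarity ratio (here difflib's SequenceMatcher) and that the threshold value is assumed, as in the text; neither choice is prescribed by the thesis.

```python
from difflib import SequenceMatcher

def match_confidence(word, lexicon_words):
    """Best letter-level similarity (0-100 %) between 'word' and any word
    stored in the lexicon.  SequenceMatcher stands in for whatever string
    comparison the system actually uses."""
    if word in lexicon_words:
        return 100.0
    best = max((SequenceMatcher(None, word, w).ratio() for w in lexicon_words),
               default=0.0)
    return best * 100.0

def find_faulty_words(tokens, lexicon_words, threshold=80.0):
    """Flag every token whose best confidence stays below the assumed
    threshold; such a token is treated as a non-word (misspelled) error."""
    return [w for w in tokens if match_confidence(w, lexicon_words) < threshold]
```

A word that is present in the lexicon scores 100% and is kept; a word whose best match stays below the threshold is flagged for the correction step that follows.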
|
<s>If a word's confidence level does not reach the (assumed) threshold value, it is considered a faulty or misspelled word.
Figure 3.16: A Text File with Faulty Words
Figure 3.17: Results of Finding Faulty Words from The Text
3.9 Correction of Faulty Word
After finding the faulty words, the correct word can be found based on the previous and post words. If it becomes too complex to decide just from the immediate previous and post words, the process looks further backwards. After applying this process we can correct the faulty words and thus come close to our goal.
Figure 3.18: Corrected Words List
The whole procedure can be described through a framework [23]. The total framework is shown in Figure 3.19.
Figure 3.19: Framework of The Process [23]
CHAPTER 4 RESULTS AND ANALYSIS
4.1 Experimental results
Many spell checkers are available for checking the spelling of misspelled words in different languages. For Bangla, our word checker is designed in such a way as to achieve higher efficiency. To test its ability to correct word errors, we conducted several experiments. As mentioned before, our system always works on a fixed type of document at a time, based on the lexicon word type, since Bangla is a vast area for predicting words or correcting misspelled words depending on the prefix and suffix of the word. The following cases show the screenshots and result tables for several such cases.
Case-1: Documents related to Sports (Cricket)
Figure 4.1 shows a document related to international cricket.
Figure 4.1: Text Document about International Cricket
Then in Figure 4.2 some erroneous or misspelled words are given for testing the system. As our concern was international cricket, we stored words related to this topic in our database. There were 9 misspelled words. In Figure 4.3 we can see that our system corrected 8 of them and 1 word remained misspelled.
Figure 4.2: Text Document with Misspelled Words of Figure 4.1
Figure 4.3: Corrected Text File by the System of Figure 4.2
In Figure 4.3 the blue marked words were correctly predicted by the software, while the red marked word was not predicted correctly. Table 4.1 later shows the accuracy percentage of our system.
Case-2: Documents related to Politics
Figure 4.4 shows a document related to politics.
Figure 4.4: Text Document about Politics
Then in Figure 4.5 some erroneous or misspelled words are given for testing the system. As our concern was politics, we stored words related to this topic in our database. There were 11 misspelled words. In Figure 4.6 we can see that our system corrected 10 of them and 1 word remained misspelled.
Figure 4.5: Text Document with Misspelled Words of Figure 4.4
In Figure 4.6 the blue marked words were correctly predicted by the software, while the red marked word was not predicted correctly. Table 4.1 later shows the accuracy percentage of our system.
Case-3: Documents related to International Affairs
Figure 4.7 shows a document related to international affairs.
Figure 4.6: Corrected Text File by The System of Figure 4.5
Figure 4.7: Text Document about International Affairs</s>
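Continuing the sketch, correction as described in Section 3.9 can be read as choosing, among candidate replacements, the word whose stored previous/post lists best agree with the actual neighbours of the faulty word; the scoring below is one hedged interpretation that reuses the lexicon structure sketched earlier, and the function name is invented.

```python
def correct_word(prev_word, post_word, lexicon, candidates):
    """Pick, from a list of candidate replacement words, the one whose stored
    previous/post frequency lists best agree with the words that actually
    surround the faulty word.  Ties on the context count fall back to the
    candidate's own frequency."""
    def score(cand):
        entry = lexicon.get(cand)
        if entry is None:
            return (0, 0)
        context = entry["prev"].get(prev_word, 0) + entry["post"].get(post_word, 0)
        return (context, entry["freq"])
    return max(candidates, key=score)
```

Candidates could come from the drop-down lookup of Section 3.2; the tie-breaking here only loosely mirrors the "look one word further back" rule described in the text.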
|
<s>Then in Figure 4.8 some erroneous or misspelled words are given for testing the system. As our concern was international affairs, we stored words related to this topic in our database. There were 7 misspelled words. In Figure 4.9 we can see that our system corrected 5 of them and 2 words remained misspelled. In Figure 4.9 the blue marked words were correctly predicted by the software, while the red marked words were not predicted correctly. Table 4.1 later shows the accuracy percentage of our system.
Figure 4.8: Text Document with Misspelled Words of Figure 4.7
Table 4.1: Accuracy percentage of our system based on the cases
Documents on | Number of Misspelled Words | Number of Corrected Words | Accuracy %
Sports (Cricket) | 9 | 8 | 88.89%
Sports (Football) | 9 | 9 | 100%
Politics | 11 | 10 | 90.91%
International Affairs | 7 | 5 | 71.43%
Case-4: Documents related to International Sports (Football)
Figure 4.10 shows a document related to international sports (football). Then in Figure 4.11 some erroneous or misspelled words are given for testing the system. As our concern was football, we stored words related to this topic in our database. There were 7 misspelled words. In Figure 4.12 we can see that our system corrected all of them. In Figure 4.12 the blue marked words were correctly predicted by the software. Table 4.1 above summarizes the accuracy percentage of the system for all cases.
Figure 4.9: Corrected Text File by The System of Figure 4.8
Figure 4.10: Text Document about International Sports (Football)
Figure 4.11: Text Document with Misspelled Words of Figure 4.10
Figure 4.12: Corrected Text File by The System of Figure 4.11
CHAPTER 5 CONCLUSION
Although Bangla is one of the most popular languages in the world, research regarding the processing of this language is still largely unsatisfactory. To keep pace with ever advancing technology, digitization of the Bangla language is of utmost importance; if we fail in this task, it will be very difficult to maintain the relevance and popularity of Bangla in the future. It is a matter of great hope and inspiration that this particular field, Bangla language development, is now a priority project of our government as well, and the government has taken the initiative by incorporating properly funded projects in this regard. The development of an accurate and efficient spell checker will go a long way towards ensuring proper, efficient and accurate digitization of the Bangla language. However, for the issues discussed in the sections above, it is very complicated to develop a Bangla spell checker whose correction method is based on errors committed at the character level. Thus, our objective is to develop a Bangla spell checker whose correction methodology is based on words, with the aid of a unique kind of lexicon. The success of our endeavor will contribute greatly to correcting misspelled words in text files, which in turn will be a great boon for the digitization systems being developed for the Bangla language.
5.1 Limitation of the system
As every system has shortcomings, ours is no exception. Bangla is a vast language, so in many cases the consequences of the</s>
|
<s>words depend on the specific type of document. For this reason, only one specific type of document can be processed by our system at a time. Another limitation of our system is that it cannot predict two successive misspelled words: since we predict words with the help of the previous and post words of the misspelled word, it is impossible for our system to correct two successive faulty words. Our system also has to work with a relatively large lexicon, and a more sophisticated algorithm could be used for it.
5.2 Future expansion of the system
The proposed system is only concerned with the spelling of a word, comparing it with just the previous and post words. In future it can be expanded to predict words with the help of the meaning of the sentences. Besides, more extensive evaluations will provide the statistical information needed to manage the suffix list, which in turn will determine the trade-off between under-stemming and over-stemming. At present we are working only with the Bangla language; we hope that in the near future this can help predict for Bengali's sister languages such as Assamese and Oriya.
REFERENCES
[1] B. S. M. H. R. Nur Hossain Khan, Gonesh Chandra Saha, "Checking the correctness of bangla words using n-gram (975-8887)," Computer Applications, vol. 89, no. 11, 2014.
[2] M. K. Naushad UzZaman, "A comprehensive bangla spelling checker."
[3] M. K. Md. Zahurul, Md. Nizam Uddin, "A light weight stemmer for bengali and its use in spelling checker."
[4] W. H. W. Kyongho Min, "Syntactic recovery and spelling correction of ill-formed sentences."
[5] "Natural language processing." Last accessed on December 17, 2014. [Online]. Available: http://en.wikipedia.org/wiki/Natural_language_processing.
[6] "Information retrieval." Last accessed on December 16, 2014. [Online]. Available: http://en.wikipedia.org/wiki/Information_retrieval.
[7] "Information extraction." Last accessed on December 16, 2014. [Online]. Available: http://en.wikipedia.org/wiki/Information_extraction.
[8] "Natural language generation." Last accessed on December 17, 2014. [Online]. Available: http://en.wikipedia.org/wiki/Natural_language_generation.
[9] "Speech recognition." Last accessed on December 17, 2014. [Online]. Available: http://en.wikipedia.org/wiki/Information_extraction.
[10] "Optical character recognition." Last accessed on December 16, 2014. [Online]. Available: http://en.wikipedia.org/wiki/Information_extraction.
[11] K. F. John Evershed, "Correcting noisy ocr: Context beats confusion."
[12] Y.-H. Tseng, "An approach to retrieval of ocr degraded text," Library Science, no. 13, pp. 1018–3817, 1998.
[13] P. M. Neha Gupta, "Spell checking techniques in nlp," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 2, no. 12, 2012.
[14] M. K. Md. Tamjidul Hoque, "Coding system for bangla spell checker," A Survey, no. 12, 2013.
[15] D. S. P. Y. Piyush Sudip Patel, "Post-processing on optical character recognition," IJAEA, vol. 2, no. 6, pp. 71–76, 2009.
[16] "Digital search tries." Last accessed on December 14, 2014. [Online]. Available: www.lsi.upc.edu/.
[17] B. C. U. Pal, P. K. Kundu, "Ocr error correction of an inflectional indian language using morphological parsing," Information Science And Engineering, no. 16, pp. 903–922, 2000.
[18] S. E. H. Thomas A. Lasko, "Approximate string matching algorithms for limited-vocabulary ocr output correction," no. 16, 2000.
[19] N. K.</s>
Ritika Mishra, "A survey of spelling error detection and correction techniques," Computer Trends and Technology, vol. 4, no. 3, 2013.
[20] L. E. N. S. A.C.</s>
|
<s>Jobbins, G. Raza, "Post processing for ocr: Correcting errors using semantic relations."
[21] "Bayesian decision theory." Last accessed on December 13, 2014. [Online]. Available: www.wikipedia.org/.
[22] F. M. S. D. M. K. Dewan Shahriar Hossain Pavel, Asif Iqbal Sarkar, "Collaborative lexicon development for bangla," no. 12, pp. 1–7, 2014.
[23] D. M. Siyuan Chen and G. R. Thoma, "Efficient automatic ocr word validation using word partial format derivation and language model," Soft Computing, vol. 14, no. 12, pp. 1329–1337, 2010.</s>
|
<s>
International Conference on Bangla Speech and Language Processing (ICBSLP), 27-28 September, 2019. 978-1-7281-5242-4/19 ©2019 IEEE. DOI: 10.1109/ICBSLP47725.2019.201506
Intrinsic Evaluation of Bangla Word Embeddings
Nafiz Sadman* (Computer Science & Engineering, Independent University, Bangladesh, Dhaka, Bangladesh, nafizsadmanpreetom@gmail.com), Md. Ashraful Amin (Computer Science & Engineering, Independent University, Bangladesh, Dhaka, Bangladesh, aminmdashraful@iub.edu.bd), Akib Sadmanee* (Computer Science & Engineering, Independent University, Bangladesh, Dhaka, Bangladesh, akibsadmanee@gmail.com), Md. Iftekhar Tanveer (Comcast Applied AI Lab, Washington DC 20005, USA, mdiftekhar_tanveer@comcast.com), Amin Ahsan Ali (Computer Science & Engineering, Independent University, Bangladesh, Dhaka, Bangladesh, aminali@iub.edu.bd)
Abstract—Word embeddings are vector representations of words that allow machines to learn semantic and syntactic meanings by performing computations on them. Two well-known embedding models are CBOW and Skipgram. Different methods proposed to evaluate the quality of embeddings are categorized into extrinsic and intrinsic evaluation methods. This paper focuses on intrinsic evaluation - the evaluation of the models on tasks such as analogy prediction, semantic relatedness, synonym detection, antonym detection and concept categorization. We present intrinsic evaluations of Bangla word embeddings created using CBOW and Skipgram models on a Bangla corpus that we built. These are trained on more than 700,000 articles consisting of more than 1.3 million unique words, with different embedding dimension sizes, e.g., 300, 100, 64, and 32. We created the evaluation datasets for the above-mentioned tasks and performed a comprehensive evaluation. We observe that word vectors of dimension 300, produced using the Skipgram model, achieve an accuracy of 51.33% for analogy prediction, a correlation of 0.62 for semantic relatedness, and accuracies of 53.85% and 9.56% for synonym and antonym detection respectively. Finally, for concept categorization the accuracy is 91.02%. The corpus and evaluation datasets are made publicly available for further research.
Keywords— Word embedding, CBOW, Skipgram, Intrinsic evaluation, Bangla corpus
I. INTRODUCTION
Word embeddings are numerical representations of words constructed from the "distributional hypothesis" [1], which states that words that occur in the same context tend to exhibit similar semantic meaning. They can capture the semantic and syntactic structure of words and are found to improve performance in a number of Natural Language Processing (NLP) tasks [2]. These vectors have many features which can be used in applications ranging from information retrieval and document classification to question answering, named entity recognition, and parsing. There exist a number of word embedding techniques, e.g., word2vec [3], GloVe [4], C&W [5], H-PCA [6], TSCCA [7], and Sparse random projection [8]. In [3] Mikolov et al. introduced two of the most popular embedding methods – word2vec using Continuous</s>
|
<s>Bag of Words (CBOW) and Skipgram models. The CBOW model predicts a word from the context of that word. On the other hand, the Skipgram model aims to predict neighboring words from a given word. Despite the vast use of these word embeddings, their main importance is defined by the "linguistic regularities and patterns" which the vectors encode [2]. These can be represented as linear translations of the embedding vectors. One popular example is that vec("রাজা") + vec("নারী") - vec("পুরুষ") is closer to vec("রাণী"). Thus the quality of a word embedding method can be defined on the contextual similarity of the vector representations of two words. There are two approaches to understanding the quality of word embedding models, namely extrinsic and intrinsic evaluations. In extrinsic evaluations one measures the quality by measuring the performance metrics for specific downstream natural language processing tasks that use the word embeddings. On the other hand, in intrinsic evaluations the semantic and syntactic relationships between words are directly measured [9]. For example, in [9] the extrinsic evaluation is done by measuring the performance in two downstream tasks (namely, noun phrase chunking and sentiment classification) using pre-trained word embeddings. As for intrinsic evaluation, Baroni et al. [10] defined five benchmark tasks - semantic relatedness, synonym detection, selectional preferences, concept categorization and completing analogies. Yin et al. [5] have also demonstrated intrinsic evaluation on word relatedness. However, to our knowledge, no definite dataset or evaluation protocol exists for the intrinsic evaluation of Bangla word embeddings. There has been some recent work on word embeddings applied to downstream tasks and only a few works on evaluating word embeddings. For example, Ismail et al. [11] evaluated the performance of different word embedding models in terms of training time and cluster quality. However, no detailed description was provided about the procedure of the experiments, and the evaluation dataset is not available. In [12] the authors evaluated sentence level embeddings. They chose to average the word vectors for the words present in a sentence to generate the sentence level embedding and compare it with the embeddings of another sentence. But this work does not evaluate word embeddings. Different Bangla word embedding models have been used to perform several downstream language processing tasks, e.g. evaluating effects on authorship attribution of Bangla literature [13], document classification [14]-[15], sentiment analysis [16]-[17], sentence classification [18], word level language identification [19] and named entity recognition [20]. All these works address only extrinsic evaluations of downstream tasks on Bangla word embeddings. However, performing intrinsic evaluation is important. Schnabel et al. point out in [9] that although extrinsic evaluation can show the performance of word embeddings for certain tasks, it cannot be used to evaluate the general quality of word embedding models. (*Both authors contributed equally in this paper.) In this paper we therefore present an intrinsic evaluation for Bangla word embeddings. The methodology closely follows the evaluation methodology presented in [9]. However, there are a couple of challenges that need to be solved. First, there exist no evaluation datasets for Bangla.</s>
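The linear-translation property mentioned above can be checked directly with gensim once a Bangla embedding is available; the file name below is hypothetical, and the snippet merely illustrates the রাজা / নারী / পুরুষ → রাণী example rather than reproducing the paper's exact setup.

```python
from gensim.models import KeyedVectors

# Hypothetical file produced by training on a Bangla corpus (see Section II).
wv = KeyedVectors.load("bangla_skipgram_300d.kv")

# vec("রাজা") + vec("নারী") - vec("পুরুষ") should land near vec("রাণী").
print(wv.most_similar(positive=["রাজা", "নারী"], negative=["পুরুষ"], topn=5))
```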
|
<s>Also, there exist no pre-trained word embeddings for Bangla. Our goal is to close these gaps in Bangla NLP research. The contributions of our paper are as follows:
• We propose five intrinsic evaluation datasets and make them publicly available for future research. These consist of Bangla translations of familiar datasets such as wordsim353 and a portion of the analogy dataset of Mikolov et al., as well as new datasets for concept categorization, synonym detection and antonym detection tasks.
• We provide a Bengali corpus which is built with web-scraped data from different sources, mainly blogs and Bangla Wikipedia. These are available at our github repository1.
• We use the python framework gensim2 to construct word embeddings using the CBOW and Skipgram models and provide a comprehensive evaluation of the embeddings.
II. CONSTRUCTING THE CORPUS
In this section we describe in detail the Bangla corpus that we constructed. Our corpus comprises a public dataset and web-scraped data. We scraped blog posts, Bangla Wikipedia, e-books of Bangla novels, stories etc., and used a public dataset available on Kaggle [21] that contains articles of Bangla newspapers. Though a large portion of the data came from a single blog, it encapsulates writings in 24 different categories, e.g., story, poem, book review, higher education, etc. Table I presents the details of the data that we used to create our corpus. As we can observe, the majority of articles are blog posts.
TABLE I. DETAILS OF THE DATASETS USED FOR CORPUS BUILDING
Data | Proportion (%)
40k_bangla_newspaper_article | 6.93
Bangla novels, story books | 2.75
Bangla Wikipedia | 5.84
Somewhere-in-Blog post | 75.05
Sachalayatan Blog post | 9.43
The raw datasets are in pickle format (.pkl), which we aggregated and cleaned for processing. The total number of sentences in the corpus is 39,500,468. We used some filters to remove special characters and English words and characters. We replaced the Bangla numbers in the datasets with a "<NUM>" tag, as numbers themselves do not carry any meaning. After preprocessing, the corpus contains a total of 3,577,910 words. Among these, 2,240,897 words are discarded due to single occurrences, and our model uses the remaining 1,337,032 words to create the model. In comparison, in [11] Ismail et al. used 521,391 unique words to generate the word embeddings. To our knowledge, our dataset is the largest publicly available Bangla corpus. However, it should be noted that, in comparison, Mikolov et al. used 1.6 billion words to train their Skipgram and CBOW models for English [3]. We also checked whether our dataset conserves the Zipfian distribution [22]. The average frequency count of our corpus is 192.26 and the standard deviation is 17,582.45. The most frequent word accounts for 6.28% of the total occurrences, while the 2nd most frequent word accounts for 3.39% and the 3rd most frequent word for 1.21% of the total occurrences. Our corpus thus complies with Zipf's law.
III. EVALUATION METHODS
In this section we propose datasets for five evaluation methods, namely analogy prediction, semantic relatedness, synonym detection, antonym detection, and concept categorization, for our Bangla word embeddings.</s>
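A compact sketch of how such a corpus could be cleaned and fed to gensim, in the spirit of the description above; the regular expressions, the file name, and all hyperparameters other than those stated in the paper (the dimension sizes, min_count=2 for discarding single occurrences, and the sg switch between CBOW and Skipgram) are assumptions.

```python
import re
from gensim.models import Word2Vec

NON_BANGLA = re.compile(r"[^\u0980-\u09FF\s]")   # drop English words, punctuation, symbols
BN_NUM = re.compile(r"[০-৯]+")                   # Bangla digit runs

def preprocess(line):
    """Roughly mirrors the cleaning described above: keep only Bangla
    characters and replace numbers with a <NUM> tag."""
    line = NON_BANGLA.sub(" ", line)
    line = BN_NUM.sub(" <NUM> ", line)
    return line.split()

with open("corpus.txt", encoding="utf-8") as f:   # hypothetical corpus dump
    sentences = [preprocess(line) for line in f]

# sg=1 -> Skipgram, sg=0 -> CBOW; min_count=2 discards single-occurrence words.
model = Word2Vec(sentences, vector_size=300, sg=1, min_count=2, workers=4)
model.wv.save("bangla_skipgram_300d.kv")
```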
|
<s>Below we describe the datasets and the evaluation methodology in detail.
A. Analogy Prediction
In analogy prediction, based on the semantic relation between two words (Word1 and Word2), we predict a word (Word4) that has a similar semantic relation with another given test word (Word3). The prediction is done using the following equation:
Word4 = Word1 + Word3 - Word2 (1)
Our evaluation dataset (Analogy_bn) was adopted from Mikolov et al. [2] and was translated by two university student volunteers. It contains 2,376 combinations of Bangla words, of which only 2,102 combinations exist in our vocabulary; therefore, we evaluate the embedding using these 2,102 combinations. Each row of our dataset is composed of 4 space-separated words, where we use the first two words to understand the relation between the words. We predict on the basis of the 3rd word, and the 4th word is given as the answer word that we want to predict; we can use the 4th word to test the accuracy of our prediction. Given the pair of words "সুইজারল্যান্ড" (Switzerland) & "সুইস" (Swiss), the predicted word related to a test word "আয়ারল্যান্ড" (Ireland) should be "আইরিশ" (Irish), because Switzerland and Swiss have the country-nationality relationship, and therefore our third word, Ireland, and the predicted word should also conserve the country-nationality relationship. This relationship should be captured in the word embedding vectors when we compute them using equation (1). In other words, if we subtract the vector representation of "সুইস" (Swiss) from that of "সুইজারল্যান্ড" (Switzerland) and add the representation of "আয়ারল্যান্ড" (Ireland), we should get a vector having a high cosine similarity with the vector representation of "আইরিশ" (Irish). When we actually compute the vector, we find that the most similar word is often not the expected word, but most of the time the expected word exists within the 5 most similar words. Therefore, we pick the 20 most similar words in the vocabulary according to cosine similarity and choose the closest k words that are not in the set of given words and are not spelling variants of them. We consider a prediction to be correct if the given answer matches any of the k chosen predictions. We experimented with different values of k ranging from 1 to 20, and we calculate the accuracy of the evaluation by checking how many predictions match the given answer.
1. https://github.com/shabdakuhok/Evaluation-datasets-for-Bangla-Word-Embedding
2. https://radimrehurek.com/gensim/models/word2vec.html
B. Semantic Relatedness
The semantic relatedness task evaluates a word embedding based on the degree of semantic similarity between two words. Our semantic relatedness evaluation dataset (Wordsim_bn) was created by translating the Wordsim353 dataset [5]; two university student volunteers did the translation. The dataset comprises 353 rows, each containing 2 words separated by tabs and the average of the semantic relatedness ratings assigned by 8 human annotators from two age groups.</s>
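One way to code the analogy protocol of Section III.A with gensim, assuming Analogy_bn is a text file of four space-separated words per row as described; spelling-variant filtering is omitted, so this is a simplified reading of the procedure rather than the authors' exact script.

```python
def analogy_accuracy(wv, rows, k=5):
    """Top-k analogy accuracy.  'rows' is a list of (w1, w2, w3, answer)
    tuples, all assumed to be present in the vocabulary."""
    correct = 0
    for w1, w2, w3, answer in rows:
        # Equation (1): Word4 = Word1 + Word3 - Word2
        neighbours = wv.most_similar(positive=[w1, w3], negative=[w2], topn=20)
        # keep the closest k neighbours that are not one of the given words
        predictions = [w for w, _ in neighbours if w not in (w1, w2, w3)][:k]
        if answer in predictions:
            correct += 1
    return correct / len(rows)

# rows = [line.split() for line in open("Analogy_bn.txt", encoding="utf-8")]
# print(analogy_accuracy(wv, rows, k=5))
```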
|
<s>The first age group consists of 6 undergraduate students aged from 21 to 25, and the second age group consists of one engineer and one high school Bangla teacher, aged from 40 to 60. To evaluate the embedding, we calculate the cosine similarity between the corresponding vectors of the word pairs as the measure of semantic relatedness. We compute the correlation between the average human-annotated ratings and the computed cosine similarities, using both the Pearson correlation and the Spearman rank correlation measures for this purpose.
C. Synonym Detection
This method predicts a word that represents the same semantic meaning as the target word. We produced the evaluation dataset called "Synonym_bn" from web-scraped questions for the Bangladesh Civil Service (BCS) examination. The dataset consists of 86 sets of words, each containing 6 words separated by commas. The 1st word is the target word, the next 4 words are the synonym candidates, and the last word is the given answer that we want to predict. However, only 65 sets of words exist in our vocabulary. We calculate the cosine similarity between the target word and the 4 candidate words; the word with the largest similarity value is chosen to be the synonym of the target word. For example, we find the cosine similarity of the target word "সূর্য" (Sun) with the 4 given options for synonym, "সুধাংশু" (Moon), "শশাংক" (Moon), "বিধু" (Moon) and "আদিত্য" (Sun). Here, if the maximum similarity is found with the word "আদিত্য" (Sun) then the prediction is accurate.
D. Antonym Detection
Similar to synonym detection, we also created a dataset for antonym detection. We scraped the web for antonym MCQ questions for our dataset called "Antonym_bn". The dataset contains 172 rows, each with six comma-separated words. Like our synonym dataset format, the first word is the target word, 4 words are antonym candidates and the last word is the answer. However, only 136 rows of words exist in our vocabulary. We calculate the cosine similarity between the target word and the four candidate words; the word with the smallest cosine similarity is our predicted word. For example, "মুক্ত" (Free) is our target word, and "স্বাধীন" (Independent), "বাহির" (Open), "বদ্ধ" (Closed), "মুক্তি" (Freedom) are our candidates. The minimum similarity is found with the word "বদ্ধ" (Closed), therefore it is selected. We get the accuracy by comparing our predicted word with the given word: if the prediction matches the given answer, we call it a successful prediction.
E. Concept Categorization
In concept categorization, the task is to cluster a set of words into the several semantic categories where they should conceptually belong. We created the "Concept_bn" dataset for this purpose. The dataset contains 3 comma-separated values, where the 1st value is the category ID, the second value represents the category, and the third value represents the concept. We have 78 words in our dataset which are clustered into 6 different categories, namely mental state, body action, vehicle, occupation, animal, and change of state. On average there are 16 words in each category. We implemented the k-means clustering algorithm on the "Cluster_bn" dataset.</s>
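The three similarity-based tasks of Sections III.B-III.D reduce to cosine similarity queries. The sketch below shows one plausible implementation with gensim and scipy; the data-structure layouts are assumptions based on the dataset descriptions above.

```python
from scipy.stats import pearsonr, spearmanr

def relatedness_correlation(wv, pairs_with_ratings):
    """pairs_with_ratings: list of (word1, word2, human_rating).
    Correlate model cosine similarity with the averaged human ratings."""
    sims, ratings = [], []
    for w1, w2, rating in pairs_with_ratings:
        sims.append(wv.similarity(w1, w2))   # cosine similarity in gensim
        ratings.append(rating)
    return pearsonr(sims, ratings)[0], spearmanr(sims, ratings)[0]

def pick_synonym(wv, target, candidates):
    """Synonym detection: candidate with the largest cosine similarity."""
    return max(candidates, key=lambda c: wv.similarity(target, c))

def pick_antonym(wv, target, candidates):
    """Antonym detection: candidate with the smallest cosine similarity."""
    return min(candidates, key=lambda c: wv.similarity(target, c))
```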
|
<s>Since there are 6 categories in our dataset, we use k = 6. The idea is to see whether the word vectors corresponding to the words from the same category fall into the same clusters. Each word belongs to a specific cluster, and we determine the category of a cluster by checking which category's words occur the maximum number of times in that cluster. We then count the words that appear in the correct cluster: we consider a word to belong to the correct cluster if the given category ID of the word and the calculated ID of its cluster are the same.
IV. RESULTS AND DISCUSSION
We proceed to use these 4 different models to perform the evaluation tasks of analogy prediction, semantic relatedness, synonym detection, antonym detection and concept categorization. The performances of the models are presented in the tables below, where CB represents the CBOW model, SG represents the Skipgram model and d represents the word embedding dimension size. The first observation from our experiments is that, as expected, the performance of the models on the different tasks tends to deteriorate as we reduce the dimension. However, surprisingly, in a number of tasks the performance of 100-dimensional word vectors is similar to that of 300-dimensional vectors. Table II presents the accuracy in percentage of the analogy prediction task. Note that the total number of analogy word combinations existing in the dataset is 2,102. We compute the accuracy considering the top k similar words.
TABLE II. ACCURACY (%) OF ANALOGY PREDICTION FOR TOP K SIMILAR WORD VECTORS FOR DIFFERENT WORD EMBEDDING DIMENSIONS
d | k=1 CB | k=1 SG | k=2 CB | k=2 SG | k=3 CB | k=3 SG | k=4 CB | k=4 SG | k=5 CB | k=5 SG
300 | 22.79 | 28.12 | 32.35 | 40.25 | 38.44 | 45.39 | 43.43 | 48.76 | 46.62 | 51.33
100 | 18.13 | 24.50 | 26.40 | 33.12 | 31.26 | 38.44 | 35.30 | 42.58 | 39.10 | 45.67
64 | 18.13 | 20.23 | 26.40 | 28.12 | 31.26 | 32.45 | 35.30 | 36.63 | 39.01 | 39.68
32 | 7.23 | 8.37 | 11.42 | 12.51 | 14.93 | 15.41 | 16.98 | 18.51 | 18.79 | 21.21
As expected, the accuracy increases as we increase k. Instead of looking only at the most similar word vector (k = 1), if we increase k to 5 we see the accuracy nearly doubles. For example, we found that given the three analogy words "ব্যাংকক" (Bangkok), "থাইল্যান্ড" (Thailand) and "বেইজিং" (Beijing), the predicted closest word is "ইরান" (Iran), but the correct answer, "চীন" (China), is indeed within the top k=5 closest words. We have also experimented with values of k greater than 5: at k=7 we achieve 50.81% accuracy, and for k=14 the accuracy rises to 60.13%; after that, however, the rate of increase in accuracy is low. We also observe that Skipgram performs better than CBOW for all dimensions. Table III shows the results of semantic relatedness; the second row is the Pearson correlation and the third row the Spearman correlation.
TABLE III. CORRELATION COEFFICIENTS OF SEMANTIC RELATEDNESS TASK FOR DIFFERENT WORD EMBEDDING DIMENSIONS
d | 300 CB | 300 SG | 100 CB | 100 SG | 64 CB | 64 SG | 32 CB | 32 SG
Pearson | 0.57 | 0.57 | 0.57 | 0.59 | 0.56 | 0.59 | 0.53 | 0.59
Spearman | 0.59 | 0.62 | 0.59 | 0.61 | 0.58 | 0.61 | 0.54 | 0.59</s>
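For the concept categorization protocol described at the start of this section, the sketch below uses scikit-learn's k-means and maps each cluster to its majority category before counting correct assignments; this mirrors the described procedure only approximately, and the function name, input layout and random seed are assumptions.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def concept_accuracy(wv, labelled_words, k=6, seed=0):
    """labelled_words: list of (word, category) pairs derived from Concept_bn.
    Cluster the word vectors with k-means, map every cluster to the category
    occurring most often inside it, then count the words that land in the
    cluster mapped to their own category."""
    words = [w for w, _ in labelled_words]
    cats = [c for _, c in labelled_words]
    X = np.stack([wv[w] for w in words])
    clusters = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(X)

    # majority category per cluster
    majority = {}
    for cid in set(clusters):
        members = [cats[i] for i in range(len(words)) if clusters[i] == cid]
        majority[cid] = Counter(members).most_common(1)[0][0]

    correct = sum(1 for i in range(len(words)) if majority[clusters[i]] == cats[i])
    return correct / len(words)
```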
|
<s>Referring to Table III, we also observe that the performance of the CBOW and Skipgram models does not differ much. Table IV presents the accuracies for synonym and antonym detection, where S represents the synonym detection task and A the antonym detection task.
TABLE IV. ACCURACY (%) OF SYNONYM (S) AND ANTONYM (A) DETECTION TASKS FOR DIFFERENT WORD EMBEDDING DIMENSIONS
d | 300 CB | 300 SG | 100 CB | 100 SG | 64 CB | 64 SG | 32 CB | 32 SG
S | 52.31 | 53.85 | 55.38 | 58.46 | 56.92 | 50.77 | 47.69 | 50.77
A | 10.29 | 5.88 | 9.56 | 9.56 | 8.82 | 8.82 | 8.82 | 17.65
We observe that for the synonym and antonym evaluations the models fail when the number of occurrences (count) of the target word in the corpus is low. For example, for words such as "কোরক" (Bud, count: 39), "অংস" (Shoulder, count: 26), and "গণ্ডদেশ" (Cheek, count: 31) the model fails to predict correctly. We find that there is no clear winner between the Skipgram and CBOW models for these two tasks. Also, we observe higher accuracy for lower dimensional vectors for Skipgram. However, in general, determining synonyms and antonyms using word embeddings is a difficult task given the way the distributional hypothesis works. The distributional hypothesis assumes that words that have the same context tend to have a similar meaning, but in natural languages the synonyms and antonyms of a word may appear in the same context, e.g. "রহিম ভালো ছেলে।" (Rahim is a good boy.) and "রহিম খারাপ ছেলে।" (Rahim is a bad boy.). Here "ভালো" (Good) and "খারাপ" (Bad) are antonyms of each other, but they appear in the same context. However, we still find that the model performs better at identifying synonyms than antonyms. This is likely due to the fact that it is more probable for a synonymous word to appear in the same context than an antonym. We perform the k-means clustering algorithm with k = 6. Fig. 1 shows the accuracy (%) obtained by the CBOW and Skipgram models with the 4 dimension sizes.
Fig. 1. Accuracy (%) for Concept Categorization for different word embedding dimensions
As mentioned before, we have six concepts. Among them we observe that words belonging to the "মানসিক অবস্থা" (mental state) and "শারীরিক ক্রিয়া" (body action) categories get mixed up after performing clustering. For example, words such as "রাগ" (Angry) and "কান্না" (Crying) can be considered as belonging to both the mental state and body action categories. As for the other clusters there is less ambiguity and the performance of the models is relatively better, as can be observed from the confusion matrices in Table V for clusters 1 to 6. Table V presents the confusion matrices for the concept categorization clusters, where m, b, v, o, a & c respectively represent the 6 categories: mental state, body function, vehicle, occupation, animal & change of state.
TABLE V. CONFUSION MATRICES FOR CONCEPT CATEGORIZATION FOR WORD EMBEDDING DIMENSION SIZE 300 USING CBOW AND SKIPGRAM MODELING.</s>
|
<s>CBOW
Predicted | m | b | v | o | a | c
m | 10 | 1 | 0 | 0 | 1 | 0
b | 0 | 9 | 0 | 0 | 2 | 0
v | 0 | 0 | 14 | 0 | 0 | 0
o | 0 | 0 | 0 | 14 | 0 | 0
a | 0 | 0 | 0 | 0 | 16 | 0
c | 1 | 0 | 0 | 0 | 5 | 5
Skipgram
Predicted | m | b | v | o | a | c
m | 11 | 1 | 0 | 0 | 0 | 0
b | 0 | 11 | 0 | 0 | 0 | 0
v | 0 | 0 | 14 | 0 | 0 | 0
o | 0 | 0 | 1 | 13 | 0 | 0
a | 0 | 0 | 0 | 0 | 16 | 0
c | 0 | 5 | 0 | 0 | 0 | 6
From the confusion matrices in Table V we observe that, for Skipgram, 5 words from the 'change of state' category were clustered with the 'body function' category; this is semantically plausible in the Bangla language. CBOW, however, clustered words from the 'change of state' category with words related to the 'animal' category, which is nonsensical.
V. CONCLUSION
In this paper we proposed 5 intrinsic evaluation methods and the corresponding datasets for evaluating Bangla word embeddings. We wanted to evaluate the quality of our word embeddings by computing the accuracies obtained by experimenting with these evaluation methods. To our knowledge, our corpus is by far the largest publicly available dataset for Bangla, with 1,337,032 unique words. However, our corpus is skewed towards blog posts, which make up 84.48% of the total sentences in the corpus, and this might be the cause of some performance issues. For analogy prediction, semantic relatedness, and concept categorization the word embedding methods perform moderately well. For synonym and antonym prediction, however, accuracy is understandably poor, as synonyms and antonyms appear in the same contexts. On the other hand, concept categorization does better than the other tasks, with average accuracy above 90%. Overall, we observed that the Skipgram model performs comparatively better than the CBOW model. In future, we will investigate whether the accuracy of the different tasks can be improved by increasing the size of our corpus and adding diversity to it with more articles from newspapers, novels, and technical documents. We will also investigate the performance of different embedding methods with different context window sizes. We have made the corpus and evaluation datasets publicly available and we believe this will instigate further research in the evaluation of Bangla word embeddings.
REFERENCES
[1] Z. S. Harris, "Distributional structure", in Word, 1964, 10(2-3):146–162.
[2] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, "Distributed representations of words and phrases and their compositionality", in NIPS, 2013a, pages 3111-3119.
[3] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space", in Proc. ICLR, 2013.
[4] J. Pennington, R. Socher, C. D. Manning, "GloVe: Global Vectors for Word Representation", in Proc. EMNLP, 2014, doi: 10.3115/v1/D14-1162.
[5] Z. Yin and Y. Shen, "On the Dimensionality of Word Embedding", in NIPS, 2018.
[6] R. Lebret and R. Collobert, "Word embeddings through Hellinger PCA", 2014, in EACL, pages 482–490.
[7] P. S. Dhillon, J. Rodu, D. P. Foster, and L. H. Ungar, "Two step CCA: A new spectral method for estimating vector models of words", 2012, in ICML, pages 1551–1558.
[8] P. Li, T. J.</s>
|
<s>Hastie, and K. W. Church, "Very sparse random projections", 2006, in KDD, pages 287–296.
[9] T. Schnabel, I. Labutov, D. Mimno, and T. Joachims, "Evaluation methods for unsupervised word embeddings", in ACL, 2015, pages 298-307.
[10] M. Baroni, G. Dinu, and G. Kruszewski, "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors", in ACL, 2014, pages 238-247.
[11] Z. S. Ritu, N. Nowshin, M. M. H. Nahid, and S. Ismail, "Performance Analysis of Different Word Embedding Models on Bangla Language", in ICBSLP, Sept. 21-22, 2018, doi: 10.1109/ICBSLP.2018.8554681.
[12] M. Aono and M. Shajalal, "Semantic Textual Similarity in Bengali Text", in ICBSLP, Sept. 21-22, 2018, doi: 10.1109/ICBSLP.2018.8554940.
[13] H. A. Chowdhury, M. A. H. Imon, M. S. Islam, "A Comparative Analysis of Word Embedding Representations in Authorship Attribution of Bengali Literature", in ICCIT, Dec. 21-23, 2018, doi: 10.1109/ICCITECHN.2018.8631977.
[14] A. Ahmad, M. R. Amin, "Bengali word embeddings and its application in solving document classification problem", in ICCIT, Dec. 18-20, 2016, doi: 10.1109/ICCITECHN.2016.7860236.
[15] M. R. Hossain, M. M. Hoque, "Automatic Bengali Document Categorization Based on Word Embedding and Statistical Learning Approaches", in IC4ME2, Feb. 8-9, 2018, doi: 10.1109/IC4ME2.2018.8465632.
[16] S. H. Sumit, M. Z. Hossan, T. A. Muntasir, T. Sourov, "Exploring Word Embedding for Bangla Sentiment Analysis", in ICBSLP, Sept. 21-22, 2018, doi: 10.1109/ICBSLP.2018.8554443.
[17] M. Al-Amin, M. S. Islam, S. D. Uzzal, "Sentiment analysis of Bengali comments with Word2Vec and sentiment information of words", in ECCE, Feb. 16-18, 2017, doi: 10.1109/ECACE.2017.7912903.
[18] M. N. Hasan, S. Bhowmik, M. M. Rahaman, "Multi-label sentence classification using Bengali word embedding model", in EICT, Dec. 7-9, 2017, doi: 10.1109/EICT.2017.8275207.
[19] P. V. Veena, M. A. Kumar, K. P. Soman, "An effective way of word-level language identification for code-mixed facebook comments using word-embedding via character-embedding", in ICACCI, Sept. 13-16, 2017, doi: 10.1109/ICACCI.2017.8126062.
[20] R. Karim, M. A. M. Islam, S. R. Simanto, S. A. Chowdhury, K. Roy, A. A. Neon, M. S. Hasan, A. Firoze, and R. M. Rahman, "A step towards information extraction: Named entity recognition in Bangla using deep learning", Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, Jul. 2019, doi: 10.3233/JIFS-179349.
[21] Z. Shujon, "40k Bangla Newspaper Article", Internet: https://www.kaggle.com/zshujon/40k-bangla-newspaper-article, Sept. 22, 2018.
[22] G. K. Zipf, "The Psychobiology of Language", Oxford, England: Houghton, Mifflin, 1935.</s>
|
<s>/EUSB10 /EUSB5 /EUSB7 /EUSM10 /EUSM5 /EUSM7 /FelixTitlingMT /Fences /FencesPlain /FigaroMT /FixedMiriamTransparent /FootlightMTLight /Formata-Italic /Formata-Medium /Formata-MediumItalic /Formata-Regular /ForteMT /FranklinGothic-Book /FranklinGothic-BookItalic /FranklinGothic-Demi /FranklinGothic-DemiCond /FranklinGothic-DemiItalic /FranklinGothic-Heavy /FranklinGothic-HeavyItalic /FranklinGothicITCbyBT-Book /FranklinGothicITCbyBT-BookItal /FranklinGothicITCbyBT-Demi /FranklinGothicITCbyBT-DemiItal /FranklinGothic-Medium /FranklinGothic-MediumCond /FranklinGothic-MediumItalic /FrankRuehl /FreesiaUPC /FreesiaUPCBold /FreesiaUPCBoldItalic /FreesiaUPCItalic /FreestyleScript-Regular /FrenchScriptMT /Frutiger-Black /Frutiger-BlackCn /Frutiger-BlackItalic /Frutiger-Bold /Frutiger-BoldCn /Frutiger-BoldItalic /Frutiger-Cn /Frutiger-ExtraBlackCn /Frutiger-Italic /Frutiger-Light /Frutiger-LightCn /Frutiger-LightItalic /Frutiger-Roman /Frutiger-UltraBlack /Futura-Bold /Futura-BoldOblique /Futura-Book /Futura-BookOblique /FuturaBT-Bold /FuturaBT-BoldItalic /FuturaBT-Book /FuturaBT-BookItalic /FuturaBT-Medium /FuturaBT-MediumItalic /Futura-Light /Futura-LightOblique /GalliardITCbyBT-Bold /GalliardITCbyBT-BoldItalic /GalliardITCbyBT-Italic /GalliardITCbyBT-Roman /Garamond /Garamond-Bold /Garamond-BoldCondensed /Garamond-BoldCondensedItalic /Garamond-BoldItalic /Garamond-BookCondensed /Garamond-BookCondensedItalic /Garamond-Italic /Garamond-LightCondensed /Garamond-LightCondensedItalic /Gautami /GeometricSlab703BT-Light /GeometricSlab703BT-LightItalic /Georgia /Georgia-Bold /Georgia-BoldItalic /Georgia-Italic /GeorgiaRef /Giddyup /Giddyup-Thangs /Gigi-Regular /GillSans /GillSans-Bold /GillSans-BoldItalic /GillSans-Condensed /GillSans-CondensedBold /GillSans-Italic /GillSans-Light /GillSans-LightItalic /GillSansMT /GillSansMT-Bold /GillSansMT-BoldItalic /GillSansMT-Condensed /GillSansMT-ExtraCondensedBold /GillSansMT-Italic /GillSans-UltraBold /GillSans-UltraBoldCondensed /GloucesterMT-ExtraCondensed /Gothic-Thirteen /GoudyOldStyleBT-Bold /GoudyOldStyleBT-BoldItalic /GoudyOldStyleBT-Italic /GoudyOldStyleBT-Roman /GoudyOldStyleT-Bold /GoudyOldStyleT-Italic /GoudyOldStyleT-Regular /GoudyStout /GoudyTextMT-LombardicCapitals /GSIDefaultSymbols /Gulim /GulimChe /Gungsuh /GungsuhChe /Haettenschweiler /HarlowSolid /Harrington /Helvetica /Helvetica-Black /Helvetica-BlackOblique /Helvetica-Bold /Helvetica-BoldOblique /Helvetica-Condensed /Helvetica-Condensed-Black /Helvetica-Condensed-BlackObl /Helvetica-Condensed-Bold /Helvetica-Condensed-BoldObl /Helvetica-Condensed-Light /Helvetica-Condensed-LightObl /Helvetica-Condensed-Oblique /Helvetica-Fraction /Helvetica-Narrow /Helvetica-Narrow-Bold /Helvetica-Narrow-BoldOblique /Helvetica-Narrow-Oblique /Helvetica-Oblique /HighTowerText-Italic /HighTowerText-Reg /Humanist521BT-BoldCondensed /Humanist521BT-Light /Humanist521BT-LightItalic /Humanist521BT-RomanCondensed /Imago-ExtraBold /Impact /ImprintMT-Shadow /InformalRoman-Regular /IrisUPC /IrisUPCBold /IrisUPCBoldItalic /IrisUPCItalic /Ironwood /ItcEras-Medium /ItcKabel-Bold /ItcKabel-Book /ItcKabel-Demi /ItcKabel-Medium /ItcKabel-Ultra /JasmineUPC /JasmineUPC-Bold /JasmineUPC-BoldItalic /JasmineUPC-Italic /JoannaMT /JoannaMT-Italic /Jokerman-Regular /JuiceITC-Regular /Kartika /Kaufmann /KaufmannBT-Bold /KaufmannBT-Regular /KidTYPEPaint /KinoMT /KodchiangUPC /KodchiangUPC-Bold /KodchiangUPC-BoldItalic /KodchiangUPC-Italic /KorinnaITCbyBT-Regular /KristenITC-Regular /KrutiDev040Bold /KrutiDev040BoldItalic 
/KrutiDev040Condensed /KrutiDev040Italic /KrutiDev040Thin /KrutiDev040Wide /KrutiDev060 /KrutiDev060Bold /KrutiDev060BoldItalic /KrutiDev060Condensed /KrutiDev060Italic /KrutiDev060Thin /KrutiDev060Wide /KrutiDev070 /KrutiDev070Condensed /KrutiDev070Italic /KrutiDev070Thin /KrutiDev070Wide /KrutiDev080 /KrutiDev080Condensed /KrutiDev080Italic /KrutiDev080Wide /KrutiDev090 /KrutiDev090Bold /KrutiDev090BoldItalic /KrutiDev090Condensed /KrutiDev090Italic /KrutiDev090Thin /KrutiDev090Wide /KrutiDev100 /KrutiDev100Bold /KrutiDev100BoldItalic /KrutiDev100Condensed /KrutiDev100Italic /KrutiDev100Thin /KrutiDev100Wide /KrutiDev120 /KrutiDev120Condensed /KrutiDev120Thin /KrutiDev120Wide /KrutiDev130 /KrutiDev130Condensed /KrutiDev130Thin /KrutiDev130Wide /KunstlerScript /Latha /LatinWide /LetterGothic /LetterGothic-Bold /LetterGothic-BoldOblique /LetterGothic-BoldSlanted /LetterGothicMT /LetterGothicMT-Bold /LetterGothicMT-BoldOblique /LetterGothicMT-Oblique /LetterGothic-Slanted /LevenimMT /LevenimMTBold /LilyUPC /LilyUPCBold /LilyUPCBoldItalic /LilyUPCItalic /Lithos-Black /Lithos-Regular /LotusWPBox-Roman /LotusWPIcon-Roman /LotusWPIntA-Roman /LotusWPIntB-Roman /LotusWPType-Roman /LucidaBright /LucidaBright-Demi /LucidaBright-DemiItalic /LucidaBright-Italic /LucidaCalligraphy-Italic /LucidaConsole /LucidaFax /LucidaFax-Demi /LucidaFax-DemiItalic /LucidaFax-Italic /LucidaHandwriting-Italic /LucidaSans /LucidaSans-Demi /LucidaSans-DemiItalic /LucidaSans-Italic /LucidaSans-Typewriter /LucidaSans-TypewriterBold /LucidaSans-TypewriterBoldOblique /LucidaSans-TypewriterOblique /LucidaSansUnicode /Lydian /Magneto-Bold /MaiandraGD-Regular /Mangal-Regular /Map-Symbols /MathA /MathB /MathC /Mathematica1 /Mathematica1-Bold /Mathematica1Mono /Mathematica1Mono-Bold /Mathematica2 /Mathematica2-Bold /Mathematica2Mono /Mathematica2Mono-Bold /Mathematica3 /Mathematica3-Bold /Mathematica3Mono /Mathematica3Mono-Bold /Mathematica4 /Mathematica4-Bold /Mathematica4Mono /Mathematica4Mono-Bold /Mathematica5 /Mathematica5-Bold /Mathematica5Mono /Mathematica5Mono-Bold /Mathematica6 /Mathematica6Bold /Mathematica6Mono /Mathematica6MonoBold /Mathematica7 /Mathematica7Bold /Mathematica7Mono /Mathematica7MonoBold /MatisseITC-Regular /MaturaMTScriptCapitals /Mesquite /Mezz-Black /Mezz-Regular /MICR /MicrosoftSansSerif /MingLiU /Minion-BoldCondensed /Minion-BoldCondensedItalic /Minion-Condensed /Minion-CondensedItalic /Minion-Ornaments /MinionPro-Bold /MinionPro-BoldIt /MinionPro-It /MinionPro-Regular /Miriam /MiriamFixed /MiriamTransparent /Mistral /Modern-Regular /MonotypeCorsiva /MonotypeSorts /MSAM10 /MSAM5 /MSAM6 /MSAM7 /MSAM8 /MSAM9 /MSBM10 /MSBM5 /MSBM6 /MSBM7 /MSBM8 /MSBM9 /MS-Gothic /MSHei /MSLineDrawPSMT /MS-Mincho /MSOutlook /MS-PGothic /MS-PMincho /MSReference1 /MSReference2 /MSReferenceSansSerif /MSReferenceSansSerif-Bold /MSReferenceSansSerif-BoldItalic /MSReferenceSansSerif-Italic /MSReferenceSerif /MSReferenceSerif-Bold /MSReferenceSerif-BoldItalic /MSReferenceSerif-Italic /MSReferenceSpecialty /MSSong /MS-UIGothic /MT-Extra /MTExtraTiger /MT-Symbol /MT-Symbol-Italic /MVBoli /Myriad-Bold /Myriad-BoldItalic /Myriad-Italic /Myriad-Roman /Narkisim /NewCenturySchlbk-Bold /NewCenturySchlbk-BoldItalic /NewCenturySchlbk-Italic /NewCenturySchlbk-Roman /NewMilleniumSchlbk-BoldItalicSH /NewsGothic /NewsGothic-Bold /NewsGothicBT-Bold /NewsGothicBT-BoldItalic /NewsGothicBT-Italic /NewsGothicBT-Roman /NewsGothic-Condensed /NewsGothic-Italic /NewsGothicMT /NewsGothicMT-Bold /NewsGothicMT-Italic /NiagaraEngraved-Reg 
/NiagaraSolid-Reg /NimbusMonL-Bold /NimbusMonL-BoldObli /NimbusMonL-Regu /NimbusMonL-ReguObli /NimbusRomNo9L-Medi /NimbusRomNo9L-MediItal /NimbusRomNo9L-Regu /NimbusRomNo9L-ReguItal /NimbusSanL-Bold /NimbusSanL-BoldCond /NimbusSanL-BoldCondItal /NimbusSanL-BoldItal /NimbusSanL-Regu /NimbusSanL-ReguCond /NimbusSanL-ReguCondItal /NimbusSanL-ReguItal /Nimrod /Nimrod-Bold /Nimrod-BoldItalic /Nimrod-Italic /NSimSun /Nueva-BoldExtended /Nueva-BoldExtendedItalic /Nueva-Italic /Nueva-Roman /NuptialScript /OCRA /OCRA-Alternate /OCRAExtended /OCRB /OCRB-Alternate /OfficinaSans-Bold /OfficinaSans-BoldItalic /OfficinaSans-Book /OfficinaSans-BookItalic /OfficinaSerif-Bold /OfficinaSerif-BoldItalic /OfficinaSerif-Book /OfficinaSerif-BookItalic /OldEnglishTextMT /Onyx /OnyxBT-Regular /OzHandicraftBT-Roman /PalaceScriptMT /Palatino-Bold /Palatino-BoldItalic /Palatino-Italic /PalatinoLinotype-Bold /PalatinoLinotype-BoldItalic /PalatinoLinotype-Italic /PalatinoLinotype-Roman /Palatino-Roman /PapyrusPlain /Papyrus-Regular /Parchment-Regular /Parisian /ParkAvenue /Penumbra-SemiboldFlare /Penumbra-SemiboldSans /Penumbra-SemiboldSerif /PepitaMT /Perpetua /Perpetua-Bold /Perpetua-BoldItalic /Perpetua-Italic /PerpetuaTitlingMT-Bold /PerpetuaTitlingMT-Light /PhotinaCasualBlack /Playbill /PMingLiU /Poetica-SuppOrnaments /PoorRichard-Regular /PopplLaudatio-Italic /PopplLaudatio-Medium /PopplLaudatio-MediumItalic /PopplLaudatio-Regular /PrestigeElite /Pristina-Regular /PTBarnumBT-Regular /Raavi /RageItalic /Ravie /RefSpecialty /Ribbon131BT-Bold /Rockwell /Rockwell-Bold /Rockwell-BoldItalic /Rockwell-Condensed /Rockwell-CondensedBold /Rockwell-ExtraBold /Rockwell-Italic /Rockwell-Light /Rockwell-LightItalic /Rod /RodTransparent /RunicMT-Condensed /Sanvito-Light /Sanvito-Roman /ScriptC /ScriptMTBold /SegoeUI /SegoeUI-Bold /SegoeUI-BoldItalic /SegoeUI-Italic /Serpentine-BoldOblique /ShelleyVolanteBT-Regular /ShowcardGothic-Reg</s>
|
<s>/Shruti /SILDoulosIPA /SimHei /SimSun /SimSun-PUA /SnapITC-Regular /StandardSymL /Stencil /StoneSans /StoneSans-Bold /StoneSans-BoldItalic /StoneSans-Italic /StoneSans-Semibold /StoneSans-SemiboldItalic /Stop /Swiss721BT-BlackExtended /Sylfaen /Symbol /SymbolMT /SymbolTiger /SymbolTigerExpert /Tahoma /Tahoma-Bold /Tci1 /Tci1Bold /Tci1BoldItalic /Tci1Italic /Tci2 /Tci2Bold /Tci2BoldItalic /Tci2Italic /Tci3 /Tci3Bold /Tci3BoldItalic /Tci3Italic /Tci4 /Tci4Bold /Tci4BoldItalic /Tci4Italic /TechnicalItalic /TechnicalPlain /Tekton /Tekton-Bold /TektonMM /Tempo-HeavyCondensed /Tempo-HeavyCondensedItalic /TempusSansITC /Tiger /TigerExpert /Times-Bold /Times-BoldItalic /Times-BoldItalicOsF /Times-BoldSC /Times-ExtraBold /Times-Italic /Times-ItalicOsF /TimesNewRomanMT-ExtraBold /TimesNewRomanPS-BoldItalicMT /TimesNewRomanPS-BoldMT /TimesNewRomanPS-ItalicMT /TimesNewRomanPSMT /Times-Roman /Times-RomanSC /Trajan-Bold /Trebuchet-BoldItalic /TrebuchetMS /TrebuchetMS-Bold /TrebuchetMS-Italic /Tunga-Regular /TwCenMT-Bold /TwCenMT-BoldItalic /TwCenMT-Condensed /TwCenMT-CondensedBold /TwCenMT-CondensedExtraBold /TwCenMT-CondensedMedium /TwCenMT-Italic /TwCenMT-Regular /Univers-Bold /Univers-BoldItalic /UniversCondensed-Bold /UniversCondensed-BoldItalic /UniversCondensed-Medium /UniversCondensed-MediumItalic /Univers-Medium /Univers-MediumItalic /URWBookmanL-DemiBold /URWBookmanL-DemiBoldItal /URWBookmanL-Ligh /URWBookmanL-LighItal /URWChanceryL-MediItal /URWGothicL-Book /URWGothicL-BookObli /URWGothicL-Demi /URWGothicL-DemiObli /URWPalladioL-Bold /URWPalladioL-BoldItal /URWPalladioL-Ital /URWPalladioL-Roma /USPSBarCode /VAGRounded-Black /VAGRounded-Bold /VAGRounded-Light /VAGRounded-Thin /Verdana /Verdana-Bold /Verdana-BoldItalic /Verdana-Italic /VerdanaRef /VinerHandITC /Viva-BoldExtraExtended /Vivaldii /Viva-LightCondensed /Viva-Regular /VladimirScript /Vrinda /Webdings /Westminster /Willow /Wingdings2 /Wingdings3 /Wingdings-Regular /WNCYB10 /WNCYI10 /WNCYR10 /WNCYSC10 /WNCYSS10 /WoodtypeOrnaments-One /WoodtypeOrnaments-Two /WP-ArabicScriptSihafa /WP-ArabicSihafa /WP-BoxDrawing /WP-CyrillicA /WP-CyrillicB /WP-GreekCentury /WP-GreekCourier /WP-GreekHelve /WP-HebrewDavid /WP-IconicSymbolsA /WP-IconicSymbolsB /WP-Japanese /WP-MathA /WP-MathB /WP-MathExtendedA /WP-MathExtendedB /WP-MultinationalAHelve /WP-MultinationalARoman /WP-MultinationalBCourier /WP-MultinationalBHelve /WP-MultinationalBRoman /WP-MultinationalCourier /WP-Phonetic /WPTypographicSymbols /XYATIP10 /XYBSQL10 /XYBTIP10 /XYCIRC10 /XYCMAT10 /XYCMBT10 /XYDASH10 /XYEUAT10 /XYEUBT10 /ZapfChancery-MediumItalic /ZapfDingbats /ZapfHumanist601BT-Bold /ZapfHumanist601BT-BoldItalic /ZapfHumanist601BT-Demi /ZapfHumanist601BT-DemiItalic /ZapfHumanist601BT-Italic /ZapfHumanist601BT-Roman /ZWAdobeF /NeverEmbed [ true /AntiAliasColorImages false /CropColorImages true /ColorImageMinResolution 150 /ColorImageMinResolutionPolicy /OK /DownsampleColorImages true /ColorImageDownsampleType /Bicubic /ColorImageResolution 300 /ColorImageDepth -1 /ColorImageMinDownsampleDepth 1 /ColorImageDownsampleThreshold 2.00333 /EncodeColorImages true /ColorImageFilter /DCTEncode /AutoFilterColorImages true /ColorImageAutoFilterStrategy /JPEG /ColorACSImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /ColorImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /JPEG2000ColorACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /JPEG2000ColorImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /AntiAliasGrayImages false /CropGrayImages true 
/GrayImageMinResolution 150 /GrayImageMinResolutionPolicy /OK /DownsampleGrayImages true /GrayImageDownsampleType /Bicubic /GrayImageResolution 300 /GrayImageDepth -1 /GrayImageMinDownsampleDepth 2 /GrayImageDownsampleThreshold 2.00333 /EncodeGrayImages true /GrayImageFilter /DCTEncode /AutoFilterGrayImages true /GrayImageAutoFilterStrategy /JPEG /GrayACSImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /GrayImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /JPEG2000GrayACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /JPEG2000GrayImageDict << /TileWidth 256 /TileHeight 256 /Quality 15 /AntiAliasMonoImages false /CropMonoImages true /MonoImageMinResolution 1200 /MonoImageMinResolutionPolicy /OK /DownsampleMonoImages true /MonoImageDownsampleType /Bicubic /MonoImageResolution 600 /MonoImageDepth -1 /MonoImageDownsampleThreshold 1.00167 /EncodeMonoImages true /MonoImageFilter /CCITTFaxEncode /MonoImageDict << /K -1 /AllowPSXObjects false /CheckCompliance [ /None /PDFX1aCheck false /PDFX3Check false /PDFXCompliantPDFOnly false /PDFXNoTrimBoxError true /PDFXTrimBoxToMediaBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXSetBleedBoxToMediaBox true /PDFXBleedBoxToTrimBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXOutputIntentProfile (None) /PDFXOutputConditionIdentifier () /PDFXOutputCondition () /PDFXRegistryName () /PDFXTrapped /False /CreateJDFFile false /Description << /ARA <FEFF06270633062A062E062F0645002006470630064700200627064406250639062F0627062F0627062A002006440625064606340627062100200648062B062706260642002000410064006F00620065002000500044004600200645062A064806270641064206290020064506390020064506420627064A064A0633002006390631063600200648063706280627063906290020062706440648062B0627062606420020062706440645062A062F062706480644062900200641064A00200645062C062706440627062A002006270644062306390645062706440020062706440645062E062A064406410629061B0020064A06450643064600200641062A062D00200648062B0627062606420020005000440046002006270644064506460634062306290020062806270633062A062E062F062706450020004100630072006F0062006100740020064800410064006F006200650020005200650061006400650072002006250635062F0627063100200035002E0030002006480627064406250635062F062706310627062A0020062706440623062D062F062B002E> /CHS <FEFF4f7f75288fd94e9b8bbe5b9a521b5efa7684002000410064006f006200650020005000440046002065876863900275284e8e55464e1a65876863768467e5770b548c62535370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c676562535f00521b5efa768400200050004400460020658768633002> /CHT <FEFF4f7f752890194e9b8a2d7f6e5efa7acb7684002000410064006f006200650020005000440046002065874ef69069752865bc666e901a554652d965874ef6768467e5770b548c52175370300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c4f86958b555f5df25efa7acb76840020005000440046002065874ef63002> /CZE 
<FEFF005400610074006f0020006e006100730074006100760065006e00ed00200070006f0075017e0069006a007400650020006b0020007600790074007600e101590065006e00ed00200064006f006b0075006d0065006e0074016f002000410064006f006200650020005000440046002000760068006f0064006e00fd00630068002000700072006f002000730070006f006c00650068006c0069007600e90020007a006f006200720061007a006f007600e1006e00ed002000610020007400690073006b0020006f006200630068006f0064006e00ed0063006800200064006f006b0075006d0065006e0074016f002e002000200056007900740076006f01590065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f007400650076015900ed007400200076002000700072006f006700720061006d0065006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076011b006a016100ed00630068002e> /DAN <FEFF004200720075006700200069006e0064007300740069006c006c0069006e006700650072006e0065002000740069006c0020006100740020006f007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000650067006e006500720020007300690067002000740069006c00200064006500740061006c006a006500720065007400200073006b00e60072006d007600690073006e0069006e00670020006f00670020007500640073006b007200690076006e0069006e006700200061006600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020004400650020006f007000720065007400740065006400650020005000440046002d0064006f006b0075006d0065006e0074006500720020006b0061006e002000e50062006e00650073002000690020004100630072006f00620061007400200065006c006c006500720020004100630072006f006200610074002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e> /DEU <FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200075006d002000650069006e00650020007a0075007600650072006c00e40073007300690067006500200041006e007a006500690067006500200075006e00640020004100750073006700610062006500200076006f006e00200047006500730063006800e40066007400730064006f006b0075006d0065006e00740065006e0020007a0075002000650072007a00690065006c0065006e002e00200044006900650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000520065006100640065007200200035002e003000200075006e00640020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP 
<FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f0073002000640065002000410064006f00620065002000500044004600200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e0020006500200069006d0070007200650073006900f3006e00200064006500200063006f006e006600690061006e007a006100200064006500200064006f00630075006d0065006e0074006f007300200063006f006d00650072006300690061006c00650073002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /FRA <FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE <FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003b103be03b903cc03c003b903c303c403b7002003c003c103bf03b203bf03bb03ae002003ba03b103b9002003b503ba03c403cd03c003c903c303b7002003b503c003b903c703b503b903c103b703bc03b103c403b903ba03ce03bd002003b503b303b303c103ac03c603c903bd002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b903c2002e> /HEB 
<FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005E205D105D505E8002005D405E605D205D4002005D505D405D305E405E105D4002005D005DE05D905E005D4002005E905DC002005DE05E105DE05DB05D905DD002005E205E105E705D905D905DD002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E05D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV (Za stvaranje Adobe PDF dokumenata pogodnih za pouzdani prikaz i ispis poslovnih dokumenata koristite ove postavke. Stvoreni PDF dokumenti mogu se otvoriti Acrobat i Adobe Reader 5.0 i kasnijim verzijama.) /HUN <FEFF00410020006800690076006100740061006c006f007300200064006f006b0075006d0065006e00740075006d006f006b0020006d00650067006200ed007a00680061007400f30020006d0065006700740065006b0069006e007400e9007300e900720065002000e900730020006e0079006f006d00740061007400e1007300e10072006100200073007a00e1006e0074002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c00200068006f007a006800610074006a00610020006c00e9007400720065002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA (Utilizzare queste impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) 
/JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend</s>
|
<s>met Acrobat en Adobe Reader 5.0 en hoger.) /NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f0020006e00690065007a00610077006f0064006e00650067006f002000770079015b0077006900650074006c0061006e00690061002000690020006400720075006b006f00770061006e0069006100200064006f006b0075006d0065006e007400f300770020006600690072006d006f0077007900630068002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB <FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM 
<FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e007400720075002000760069007a00750061006c0069007a00610072006500610020015f006900200074006900700103007200690072006500610020006c0061002000630061006c006900740061007400650020007300750070006500720069006f0061007201030020006100200064006f00630075006d0065006e00740065006c006f007200200064006500200061006600610063006500720069002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS <FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043f043e04340445043e0434044f04490438044500200434043b044f0020043d0430043404350436043d043e0433043e0020043f0440043e0441043c043e044204400430002004380020043f04350447043004420438002004340435043b043e0432044b044500200434043e043a0443043c0435043d0442043e0432002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SLV <FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020007000720069006d00650072006e006900680020007a00610020007a0061006e00650073006c006a00690076006f0020006f0067006c00650064006f00760061006e006a006500200069006e0020007400690073006b0061006e006a006500200070006f0073006c006f0076006e0069006800200064006f006b0075006d0065006e0074006f0076002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR <FEFF005400690063006100720069002000620065006c00670065006c006500720069006e0020006700fc00760065006e0069006c0069007200200062006900720020015f0065006b0069006c006400650020006700f6007200fc006e007400fc006c0065006e006d006500730069002000760065002000790061007a0064013100720131006c006d006100730131006e006100200075007900670075006e002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f006200610074002000760065002000410064006f00620065002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /ENU (Use these settings to create Adobe PDF documents suitable for reliable viewing and printing of business documents. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
|
<s>INDIAN JOURNAL OF APPLIED LINGUISTICS VOL. 42, NO. 1-2, JAN-DEC 2016 Some Corpus Access Tools for Bangla Corpus NILADRI SEKHAR DASH Indian Statistical Institute, Kolkata ABSTRACT The techniques and strategies that are used to develop some Corpus Access Tools (CATs) for Bangla, as a part of the Bangla Language Tool Kit (BLTK), are reported here. These tools are useful for retrieving linguistic data and relevant information from the modern Bangla written text corpus. At the initial stage only three tools are developed – the Word Search Tool (WST), the Collocation Search Tool (CST), and the Sentence Search Tool (SST) – which are combined into a single graphical user interface so that it can work quite elegantly to retrieve the required data and information from the corpus. The hurdles that are encountered while trying to develop these tools are also addressed in this paper. Moreover, the problems and the solutions to these problems are also explicated here so that future researchers do not face much trouble in generating xml files to design new types of corpus access tools. One can visualize direct application of these tools in lexical search, concordance, collocation, language teaching, dictionary compilation, and language description. The strategies and techniques that have been adopted for developing these tools for the Bangla language corpus may also be utilized successfully to create xml files and similar tools for corpus processing for other Indian languages. Keywords: corpus, Bangla, word search, collocation, sentence, part-of-speech, tagging 1. INTRODUCTION The idea of developing Corpus Access Tools (CAT) was conceived when some Bangla corpus users asked for such devices to retrieve relevant linguistic information, data and examples from the Unicode-compatible TDIL (Technology Development for Indian Languages) Bangla written prose corpus, which is now available on the Internet for free download. To serve their requirements, we have developed some simple corpus access tools with which the end-users are able to perform simple search functions like word search, collocation search, sentence search, etc. quite easily and quickly on the Bangla corpus. Based on the arguments of an earlier scholar (Biber 1993), we believe that this tool will enable language users to retrieve the necessary linguistic data, information and examples from the Bangla text corpus to address their language-related queries and questions. The entire process of tool development is divided into two stages: (a) converting UTF-8 encoded Bangla text files (i.e., corpus files) into xml files by adding xml tags to sentences, words, numerals, alphabets, and other symbols (Bird & Loper 2004), and (b) making the xml files compatible for word search, collocation search, and sentence search (Bird 2005; Bird 2006). The advantage of xml files and corpus access tools is that they are useful for people working in linguistic query answering, natural language processing, language technology, language teaching, word search, dictionary compilation, and language description in Bangla. In Section 2, I refer to some of the early works done in this area to show how our work is characteristically different from the</s>
|
<s>works claimed to be done by others; in Section 3, I propose a set of principles for developing these tools for the Bangla corpus; in Section 4, I refer to the basic functions of the Bangla corpus search tools with reference to the Bangla corpus on which the tools run; in Section 5, I describe the process of creating the customized xml files from the UTF-8 encoded text files automatically and then using xml.dom-based search mechanisms for parsing the xml files to generate accurate outputs; in Section 6, I register the importance of such tools in various domains of natural language processing, language technology, language teaching, and language description. In conclusion, I focus on the present state of such tools for Bangla as well as for corpora of other Indian languages. 2. EARLY WORKS There are some tools that are made to access corpora not only of English but also of many other languages. These tools, however, differ in goal and application from the tools reported in this paper. An example of this is the WordSmith tool (http://www.lexically.net/wordsmith/), which is used quite exhaustively for many languages across the world. This particular tool, however, does not support the Bangla language corpus, although it claims to support some of the Indian languages like Urdu and Hindi. The next one is the Google n-gram viewer, where one can search not only for word co-occurrence statistics but also for how the patterns change over a long period of time (https://books.google.com/ngrams). This tool mostly operates on digital corpora procured by Google relating to languages like American English, British English, Chinese, German, Hebrew, Italian, Russian, Spanish, Dutch, French, etc. Moreover, since it allows part-of-speech-based search, it requires corpora tagged at the part-of-speech level. This tool could be an excellent utility for Bangla and other Indian languages, but unfortunately there is no Indian language diachronic text corpus available in a POS-tagged version to be uploaded and accessed by this tool. Until and unless Google uploads similar types of text corpora of Indian languages into its server-based interface, it will not be of help to the Indian languages. Although Indian languages are right now not supported by the Google n-gram viewer, it inspires us to think of developing a tool of this kind with an option to display co-occurrences from the Web, for which we can use n-gram options for Bangla text as well. It is also possible to use the Wikipedia dump for Indian languages to support similar queries. The third effort in this direction is initiated by Leipzig University, Germany, through the development of a web-based interface for 230 Corpus-Based Monolingual Dictionaries (http:corpora.uni-leipzig.de). Although the developers claimed that they have incorporated corpora of several Indian languages like Goan Konkani, Gujarati, Malayalam, Panjabi, Tamil, Telugu, Bangla, Bishnupriya, Fiji Hindi, Kannada and Western Panjabi into their collection, in reality it is not clear how these corpora are procured, processed, and stored into the lexical database used for</s>
|
<s>the monolingual dictionaries. When we searched their corpus-based monolingual dictionaries, we came out with no result – every time we were asked to try later as the database was under maintenance. Although it seemed to be a highly useful tool for lexicographic search in the texts of Indian languages, the system is not yet built up to support lexical search in corpora – as our tool is able to do. The fourth and more recent attempt is made by the Society for Natural Language Technology Research, Kolkata (http:nltr.org), which has digitized a large volume of Bangla literary texts, which, as claimed, are searchable through a simple web-based interface that works on-line. Although this search interface appears to be simple in operation, it is not platform-independent, and it is not useful for object-oriented linguistic search where one is interested in identifying different lexical items in the corpus within specific pre-defined contextual frames – a highly sophisticated and intelligent interactive interface of the kind that has been adequately addressed in the tool developed by us. We are seriously thinking of integrating the literary texts developed by the Society for Natural Language Technology Research into our interface as well. However, it is not possible at present, as these resources are in highly restricted use (available only for on-line reading) without any scope for using the texts as a training corpus database. 3. PRINCIPLES ADOPTED FOR CAT Following the observations of earlier scholars (Shannon 2003), it was clearly realised that unless the basic principles behind the development of the tools are defined, it may create problems in justifying the tools in works of corpus processing. Therefore, some principles are identified and laid down in clear terms to make the entire process of tool development systematic, coordinated and unambiguous. Moreover, these principles are adopted and followed rigorously in every stage of tool development to ensure robust applicability of the tools on the Bangla corpus. The principles that are adopted and followed are as follows: Principle 1: XML Encoding for Storage The raw text files of .txt format are not very suitable for storage on a server, because processing of such files is time consuming and inefficient. So these files are required to be converted into a format which is more server-friendly. Hence the xml format is opted for better storage of the text files. This helps the tool developers (a) to define their customised tagset by which they can tag the text data easily as well as mark all the features of words in accordance with the requirements of a particular tool, and (b) at later stages, to easily fetch target data based on the tagset and features assigned to the texts (Klein 2006). Besides, the xml format is extensively used to store the metadata associated with each text file. This gives an option for either using raw xml files or storing the xml files within a database and using that. The former option is preferred to the latter for obvious reasons. The main reasons behind selecting xml files for data storage</s>
|
<s>are the following: 1. There are many utility libraries and APIs for xml files. These make the process of extracting tag contents, selecting portions of a text, or searching document headers much simpler. 2. Future enhancement of properties in the text is easier in the xml format. In a regular format the database needs to be redesigned whenever the search criteria are extended, but in the case of the xml format the addition of a few tags will do the trick. 3. Word tagging and retrieval techniques based on the xml format are highly effective for the text database stored in the system. 4. As data security is not a big issue in this case, there is no need to burden the corpus database with additional intratextual annotation in the xml files. 5. Future integration of the xml files with additional text databases is easy and trouble free. Thus it is pertinent to argue that the use of the xml format is crucial as it does not require the use of any particular software. It is highly flexible and simple in text database management. Besides, it builds up a huge support baseline in standard utilities like web browsers, database access systems, easy retrieval, and data display, etc. (A minimal illustrative sketch of this text-to-xml conversion is given after this record.) Principle 2: Uniformity The techniques that we have used to create the tools are not specifically designed to be useful for the Bangla text corpus only. They can also be used for corpora of other Indian languages, including English. The tools will function quite effectively in those cases as well. Principle 3: Universality The techniques we have used to build the xml tagged sentences from the corpus are on par with the technology used for other languages, say, English, French or Spanish. Following the file formats used to store the British National Corpus and the Australian Corpus of English, we have used the xml format as our chosen file and storage format. Principle 4: Extensibility The tagset that we have used is open to extension and modification of any kind. Moreover, if required, it is also possible to do further filtering of data and add more metadata into the xml files. That means that by adding more tags to the sentences we can refine the search queries based on more fine-tuned criteria. Principle 5: Expandability The module allows us to expand the load of data in the corpus, if required, at any point of time. Moreover, the new text files that are added to the existing corpus database can also be converted into xml. These xml files can be uploaded to the server and be made ready for use within a very short span of time with little effort. Principle 6: Dispensability If required, all the tagged xml files can easily be converted into untagged xml files within a few steps without invoking any complex process of text conversion. That means the original text database can easily be retrieved from the corpus tagged with metadata.</s>
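A minimal sketch of the text-to-xml conversion discussed under Principle 1 is given below. It is only an illustration, not the converter actually used for BLTK: it assumes a plain UTF-8 corpus file holding one sentence per line, and the tag names corpus, s and w (with their id attributes) are invented for the example rather than taken from the actual tagset.

    # Illustrative sketch only: convert a UTF-8 text file (assumed to hold one
    # sentence per line) into a simple xml file with sentence and word tags.
    # The <corpus>, <s>, <w> tags and "id" attributes are assumptions made for
    # this example, not the actual BLTK tagset.
    import xml.etree.ElementTree as ET

    def text_to_xml(txt_path, xml_path):
        root = ET.Element("corpus", {"source": txt_path})
        with open(txt_path, encoding="utf-8") as fh:
            for s_id, line in enumerate(fh, start=1):
                tokens = line.split()
                if not tokens:
                    continue                      # skip blank lines
                s_node = ET.SubElement(root, "s", {"id": str(s_id)})
                for w_id, token in enumerate(tokens, start=1):
                    w_node = ET.SubElement(s_node, "w", {"id": str(w_id)})
                    w_node.text = token
        ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)

    # Hypothetical usage: text_to_xml("bangla_0001.txt", "bangla_0001.xml")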
|
<s>Principle 7: User Friendliness The meaning and the purpose of the tagset used for our work are kept maximally open so that future enhancement of the existing coding system or the method can easily be carried out. Principle 8: One-to-One Output In our module, a sentence has just one xml tagged form. It does not allow a sentence to have two different xml tagged forms. In the reverse scheme, reverting from an xml tagged sentence to a normal one uniquely gives the sentence from which it was generated. This implies that no ambiguity is permitted to arise within the process of file conversion. (A small round-trip sketch illustrating Principles 6 and 8 is given after this record.) 4. THE CORPUS ACCESS TOOL The CAT is a java-based application, which is designed to perform word-, collocation-, and sentence-based search in the TDIL Bangla prose text corpus. In this context, it is necessary to refer to the content and composition of the TDIL Bangla corpus on which our tool runs. We have downloaded this Bangla corpus from the web, as it is made available as an open resource for research and development purposes. This corpus contains five million words of modern Bangla prose texts that were published between 1981 and 1995. The texts cover more than 85 subject domains with proportional representation of text samples from each domain to make the corpus maximally authentic about the present status of the language used in the state of West Bengal, India. Initially the corpus was developed in the ISCII (Indian Standard Code for Information Interchange) format, and because of this format the corpus was not easy to use in any work of linguistics or language technology. In recent times, however, it has been converted into a Unicode-compatible format, and due to this the corpus has been made available for global access and utilization. Moreover, the texts in the Unicode version are roughly normalized, with removal of crude orthographic errors and junk. At present the corpus has 1270 files – each file containing nearly 500 sentences in the form of a continuous text without any pause/break at the end of a sentence or a paragraph (Dash 2007). While formatting the files we observed that each file had two major parts. The first part contained elaborate metadata in the form of a Header File that recorded the details of extra-textual information relating to the source of the text. The second part, on the other hand, contained the Bangla prose text in standard Bangla orthography. We had to use this particular corpus as our primary text database because there was no other corpus so well framed and well formatted as to be maximally useful for our work. Moreover, this corpus has also been the primary database for verification and validation of the search functions that are currently supported by the CAT we have developed. At present the CAT supports the following three search functions (Christ 1994): 1. Word Search, 2. Sentence Search, and 3. Collocation Search. Each search function is explicated with some examples obtained from the corpus at our disposal.</s>
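Principles 6 and 8 above amount to a guarantee that the tagging is lossless: one sentence has exactly one tagged form, and stripping the word tags gives back exactly the sentence it was built from. The small round-trip check below illustrates that guarantee under the same assumed s/w scheme as the earlier sketch; the Bangla sentence is just an arbitrary example.

    # Illustrative round-trip check for Principles 6 and 8: tag a sentence,
    # untag it again, and confirm that the original sentence is recovered.
    # Uses the assumed <s>/<w> scheme from the earlier sketch.
    import xml.etree.ElementTree as ET

    def tag_sentence(sentence, s_id=1):
        s_node = ET.Element("s", {"id": str(s_id)})
        for w_id, token in enumerate(sentence.split(), start=1):
            w_node = ET.SubElement(s_node, "w", {"id": str(w_id)})
            w_node.text = token
        return s_node

    def untag_sentence(s_node):
        # Join the word tokens back into a plain sentence string.
        return " ".join(w.text for w in s_node.findall("w"))

    original = "বসুন আপনি"              # arbitrary whitespace-tokenized example
    assert untag_sentence(tag_sentence(original)) == original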
|
<s>4.1. Word search The word search tool works by finding the particular word that a user wants to locate in the corpus. At first a user enters a Bangla word (or a substring of it) into the search interface. Since the pipeline does not include the process of lemmatization, the sub-string search algorithm does not yield information about lemmas but matches the input sub-string against a larger string to catch a word. The tool then takes the word as an input, searches through the corpus, and shows all the sentences that contain that particular word (or the substring of it) as these are found in the corpus. It also provides the user with the sentence numbers and the names of the files that contain the word the user searches for. The user has the liberty to specify a particular filename or a topic name, if the user wants, to restrict the search to a particular file, a specific domain, or a subject topic. (A minimal illustrative sketch of this substring-based search is given after this record.) This strategy has several functional advantages. First, it not only searches out the target word a user wants from the corpus, but also helps a user to find different forms of a particular word (e.g., affixed, inflected, non-inflected, duplicated, compounded, etc.) used in the corpus, along with the details of their usage patterns, frequency of use in a particular topic, and the different meanings the words evoke in the various contextual frames in which they occur. A simple search for the word মাথা [māthā] “head”, for instance, produces more than a hundred different sentences from the corpus, based on which it is possible to manually sub-classify the sentences (S) further in accordance with their meanings (M) denoted by the word in its occurrence within different syntactic-cum-semantic contexts (C), as the following examples and the diagram (Figure 1) show. a. স ei দেলর মাথা se ei daler māthā “He is the leader of this gang” b. তার মাথা ফেটেছ। tār māthā pheteche “His head is bleeding” c. পাহােড়র মাথায় তার বািড ়। pāhārer māthāy tār bāri “His house is at the top of the hill” d. eবার মাথায় িচrিন দাo। ebār māthāy chiruni dāo “Now, comb your hair” e. আ ুেলর মাথাটা ব থা করেছ। āṅguler māthāta bythā karche “The tip of the finger is paining” f. চার মাথার মােড় তার দাকান। chār māthār more tār dokān “His shop is at the crossing of four roads” g. ছেলিটর aে বশ মাথা। cheletir aṅke beś māthā “The boy is intelligent in Mathematics” [Figure 1. Sub-classification of sentences based on meaning: the word মাথা (māthā) branches into contexts C1…Cn, each context maps to a meaning M1…Mn, and each meaning groups the sentences S1…Sn in which it occurs.] This work, however, is done manually, since S, C, and M level information is NOT coded into the corpus and the search does not yield this output.</s>
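The substring-based word search just described can be pictured with the short sketch below. It is an illustration under the same assumed xml layout as in the earlier sketches and uses the standard xml.dom.minidom module mentioned in Section 5; the actual WST additionally allows the search to be restricted to a particular filename or topic, which is omitted here.

    # Illustrative sketch of the substring-based word search: no lemmatization,
    # so any word that merely contains the query string is matched (as with কর).
    # Assumes the <s>/<w> layout sketched earlier, not the actual BLTK files.
    from xml.dom import minidom

    def word_search(query, xml_paths):
        hits = []
        for path in xml_paths:
            doc = minidom.parse(path)
            for s_node in doc.getElementsByTagName("s"):
                words = [w.firstChild.data
                         for w in s_node.getElementsByTagName("w")
                         if w.firstChild is not None]
                if any(query in w for w in words):
                    # report file name, sentence id, and the matching sentence
                    hits.append((path, s_node.getAttribute("id"), " ".join(words)))
        return hits

    # Hypothetical usage: word_search("মাথা", ["bangla_0001.xml"])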
|
<s>The search only gives out a syntactic frame from which the S, C, and M information can be manually retrieved. Since the tool also provides an option for substring-based search, a user can find different structural forms of the same word in the corpus. For example, a substring-based search for the word কর (kar) extracts several forms of different grammatical categories to show how various surface forms of different parts-of-speech, which exhibit orthographic proximity, may be clustered together under one base/root (Figure 2). কর (kar) Present: কির (kari), কর (kar), কের (kare) Past: করলাম (karlām), করেল (karle), করেলন (karlen) Future: করব (karba), করেব (karbe), করেবন (karben) Non-Finite: কের (kare), করেল (karle), etc. Infinitive: করেত (karte), etc. Causative: করাব (karāba), করাল (karāla), করাত (karāta) Gerundial: করােনা (karāno), etc. Figure 2. Clustering of words under a single base/root This, however, also yields a lot of ambiguous results, because the substring-based search for the word কর (kar) also extracts many other forms that do not belong to this set. Rather than considering it a fault of the tool, we consider it an advantage, as it produces all the words that are made up with the sub-string কর (kar) at the initial position. A further scheme of lexical classification will ensure that words are classified into appropriate lexical groups even though these are initially compiled together as an obvious consequence of a mere orthographic match. This allows a user not only to assemble simple forms of the base/root but also helps him/her to assemble all the inflected forms (and other forms) of the word available in the corpus, as the above diagram shows. After studying the association patterns of other words with কর (kar), or with all other forms generated from কর (kar), a user easily traces conceptual proximity or semantic closeness among the forms with regard to their usage in natural Bangla texts (Bird, Klein, Loper & Baldridge 2008). Based on this kind of observation we may formulate a hypothesis that words representing similar orthographic structures may have some conceptual alliances and semantic affinities. It is also possible to infer that they are derived from the same etymological antiquity, due to which they exhibit almost similar semantico-grammatical patterns of occurrence in the language (Holmes, Ahmad & Abidi 1994). 4.2. Collocation search The next tool that we have developed can list and display all collocated words based on user input. First it invites a user to enter a Bangla word (Target Word = TW) in the interface. Once the TW is entered, it tries to retrieve all the words that are used immediately before (Type-I) and after (Type-II) the TW in the corpus. Then it lists all the word pairs and displays them in a frequency-based sequential order in which the most frequently used word pair occupies the first position, followed by the others, as the examples show (Table 1). Table 1. Collocation pattern of হাত (hāt) “hand” Type-I: মাথায়হাত māthāy</s>
|
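The Type-I/Type-II pairing just described can be sketched as follows. Again, this is only an illustrative Python sketch of the logic; the deployed tool is a Java/JSP application over xml files, and the whitespace tokenisation and file layout assumed here are simplifications.

# Illustrative sketch of the Type-I/Type-II collocation logic described above;
# the original tool is a Java/JSP application working on xml files.
# Tokenization by whitespace and the file layout are simplifying assumptions.
from collections import Counter
import glob

def collocations(target, corpus_glob="corpus/*.txt"):
    """Count words occurring immediately before (Type-I) and after (Type-II) the TW."""
    before, after = Counter(), Counter()
    for path in glob.glob(corpus_glob):
        with open(path, encoding="utf-8") as fh:
            for sentence in fh:
                tokens = sentence.split()
                for i, tok in enumerate(tokens):
                    if tok == target:
                        if i > 0:
                            before[tokens[i - 1]] += 1   # Type-I pair
                        if i + 1 < len(tokens):
                            after[tokens[i + 1]] += 1    # Type-II pair
    return before.most_common(), after.most_common()     # frequency-sorted pairs

type1, type2 = collocations("হাত")
print(type1[:10])   # most frequent words used immediately before হাত
print(type2[:10])   # most frequent words used immediately after হাত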
Table 1. Collocation patterns of হাত (hāt) "hand"

Type-I:
মাথায়হাত (māthāy hāt) "hand on head"
বুেকহাত (buke hāt) "hand on chest"
মুেখহাত (mukhe hāt) "hand on mouth"
গােয়হাত (gāye hāt) "to punish"
পােয়হাত (pāye hāt) "to show respect"
মুখহাত (mukh hāt) "face and hand"
কােলাহাত (kālo hāt) "black hand"
জলহাত (jal hāt) "wet hand"
eকহাত (ek hāt) "one hand"
আপনাহাত (āpnā hāt) "self hand"

Type-II:
হাতপা (hāt pā) "hand and leg"
হাতমুখ (hāt mukh) "hand and face"
হাতসাফাi (hāt sāphāi) "stealing"
হাতকাটা (hāt kātā) "cut off hand"
হাতকড়া (hāt karā) "handcuff"
হাতছািন (hātchāni) "hand waving"
হাতপাখা (hāt pākhā) "hand fan"
হাতপাতা (hāt pātā) "begging"
হাতটান (hāt tān) "miserliness"

To explicate the process of collocation search, we present below a list of collocations made with the word মাথা (māthā), fished out from the corpus with our word-search tool, to show how the word exhibits its usage patterns and meaning variations within different collocation frames obtained from larger syntactic strings (Table 2).

Table 2. Search results corresponding to মাথা (māthā)

মােছরমাথা (mācher māthā) "head of fish"
ছাতারমাথা (chātār māthā) "useless thing"
আলমািররমাথা (ālmārir māthā) "top of cupboard"
gােমরমাথা (grāmer māthā) "village head"
পাহােড়রমাথা (pāhārer māthā) "peak of a hill"
আ েুলরমাথা (āṅguler māthā) "finger tip"
জেলরমাথা (jaler māthā) "water surface"
নদীরমাথা (nadir māthā) "source of river"
রাsারমাথা (rāstār māthā) "end of road"
রাsারমাথা (rāstār māthā) "crossing of roads"
পাকামাথা (pākā māthā) "sharp brain"
kুেররমাথা (kSurer māthā) "blade of razor"
কােজরমাথা (kājer māthā) "sense of work"
aে রমাথা (aṅker māthā) "knowledge of Maths"
দেলরমাথা (daler māthā) "captain of team"
গ ােঙরমাথা (gyāṅer māthā) "gang leader"
নৗকারমাথা (naukār māthā) "head of boat"
aিফেসরমাথা (aphiser māthā) "boss in office"
কাmািনরমাথা (compānir māthā) "company owner"
পিরবােররমাথা (paribārer māthā) "head of a family"
মাথায়মাথায় (māthāy māthāy) "of equal height"
কাগেজরমাথা (kāgajer māthā) "margin of paper"
িবছানারমাথা (bichānār māthā) "edge of bed"
দেশরমাথা (deser māthā) "president of country"
ফাড়ারমাথা (phorār māthā) "mouth of pimple"
মুdারমাথা (mudrār māthā) "side of a coin"
পেনরমাথা (pener māthā) "cap of pen"
সাজামাথা (sojā māthā) "straight face"
uঁচুমাথা (ũchu māthā) "high pride"

In reality the tool categorically identifies the number of times each TW is used in the corpus in its specific collocational environments. Here again a user can monitor and restrict his/her search to a particular topic or a particular file by specifying the filename or the topic name of a text. Although we know that the efficiency of such a tool is substantially improved by adding techniques for noise removal, by replacement of a raw and noisy corpus with a normalized and noise-free one, and by replacement of an un-annotated corpus with a POS tagged corpus, we have not been able to apply our tool on a noise-free POS tagged corpus to increase its level of accuracy, because such a resource is not available in Bangla. Since this tool does NOT operate on POS based information to retrieve results, the results are not compared to any generic collocation search. We have given below a screen shot which displays some examples of collocation of a word derived from the corpus (Figure 3).

Figure 3. Collocation of উত্তরাধিকার (uttarādhikār) "inheritance"

4.3. Sentence search

The third tool that we have developed can be used on the corpus to generate a list of sentences that contain specific numbers of words. A user has to provide the exact number of words supposed to be present in a sentence, and the tool will fish out from the corpus all the sentences that are made up of the specified number of words, along with their respective filenames and sentence numbers. Here again the user is free to restrict his/her search to a particular topic or a file by specifying the filename or the topic name (Vaughan & Thelwall 2004). This tool is highly useful for finding particular types of sentences which tend to be smaller or larger in size with regard to the number of words. For instance, if a user wants to find sentences made with just two words, s/he has to specify the number of words in a sentence, and the tool will retrieve all the sentences made with just two words from the corpus, as the following examples show.

a. কাথায়চলেল? (kothāy challe?) "Where are you going?"
b. হভাগবান! (he bhagabān!) "Oh my God!"
c. বসুনআপিন। (basun āpni) "You sit" (Hon.)
d. িলেখেফল! (likhe phela!) "Write down!"
e. আপিনমহান। (āpni mahān) "You are great."

What is most striking in this application is that it fishes out not only simple declarative sentences, but also sentences of other types, including imperative, interrogative, and exclamatory ones. Such sentences are hardly used in statement or narration but are used mostly in conversation and dialogic interaction. Since there is no coding for sentences in the corpus, it becomes an easy task for the tool to count the number of words and list them in the final output. The advantage of this application lies in its ability to extract sentences based on the number of words from the specified files, topics, or the whole corpus, as well as in its ability to capture the finer discourse-related information embedded in the sentences. Thus, with the help of this tool a user can easily find the sentences made up of varied numbers of words and can calculate their frequency of occurrence in the different text types included in the corpus (Robinson, Aumann & Bird 2007). However, it should be clearly stated here that this is not an attempt to link sentence length with complexity. We have NOT tried this because we know that to establish such a link we need to devise a matrix that provides some parameters for sentence complexity. The frequency count that we get at word level for a sentence is a very simple count without scaling the text types in the corpus (i.e., sports, finance, science, law, etc.).
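The word-count based retrieval of Section 4.3 reduces to a simple filter, sketched below. As before, this is only an illustrative Python sketch; the deployed tool parses the xml corpus with Java, and the plain-text layout (one sentence per line) is an assumption for the example.

# Illustrative sketch of the sentence-length search described in Section 4.3;
# the deployed tool parses the xml corpus with Java. The plain-text layout
# (one sentence per line) is an assumption for this example.
import glob

def sentences_with_n_words(n, corpus_glob="corpus/*.txt"):
    """Yield (filename, sentence_number, sentence) for sentences of exactly n words."""
    for path in sorted(glob.glob(corpus_glob)):
        with open(path, encoding="utf-8") as fh:
            for number, sentence in enumerate(fh, start=1):
                if len(sentence.split()) == n:   # simple whitespace word count
                    yield path, number, sentence.strip()

# Example: retrieve all two-word sentences from the corpus
for fname, sno, sent in sentences_with_n_words(2):
    print(fname, sno, sent)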
Given below is a screen shot that cites a list of sentences containing 15 words; these sentences are obtained from the Bangla corpus (Figure 4).

Figure 4. Search of sentences containing 15 words

The main motive behind developing such tools is to provide linguistic researchers with some useful devices for performing rudimentary tasks relating to searching, counting, and sorting lexico-syntactic data and information available in the Bangla corpus (Sinclair 2004). Although all the functions carried out by the tools can also be performed manually, doing so would consume an enormous amount of time, whereas a computer can complete the same tasks within the shortest span of time. Moreover, in the case of machine-based application the probability of error is close to null, while in the case of manual calculation the amount of error is very high and not beyond question. Therefore, it is rational to use automated search tools to increase the level of accuracy as well as efficiency in the tasks carried out on the corpus database. Furthermore, in cases where trained human resources are not readily available, one can use these tools to execute research goals successfully. For instance, for collocation analysis of words, instead of an investigator spending an entire lifetime finding the words that are used immediately before and after the TW, a user can employ this tool to do the same task easily, quickly, and accurately. The accepted utility of the CAT is, in fact, comparable to that of many Python and R libraries and packages, as it allows users to accomplish a lot of what the tool promises to do and more.

5. IMPLEMENTATION

The entire scheme is envisioned to be implemented in four primary steps: (a) assessing user requirements; (b) creating xml files of the whole corpus; (c) creating user-friendly tools for handling the xml files and generating results for the end users; and (d) developing the user interface.

5.1. Assessing user requirements

Our primary goal is to develop tools for word as well as collocation search from the whole corpus database, from a particular file, or from a specific text type. Also, we wanted to develop a tool that counts the number of words and sentences in a text file, in a topic, or in the whole corpus. Moreover, we decided to develop a tool that finds sentences containing a specific number of words. In all these activities our main focus is to make the tools maximally responsive so that they can be utilised by the end users with as little complication as possible (Arnold 2000). To achieve this mission, we are planning to collect user queries during a beta test of the tool, which can help us understand the needs of the users more accurately as well as redesign the tool (if needed) to gain from the generalizations drawn from the users' input. Moreover, since the tool is Java based, instead of XML we are seriously considering using JSON (JavaScript Object Notation), which we hope may provide a better interface for global access to the tools.
5.2. Creating xml files of the corpus

We are provided with 1270 UTF-8 encoded text files of Bangla prose texts. Each text file contains metadata in the form of a Header with 8 (eight) entries: file name, topic name, sub-topic name, year of publication, medium of publication, name of the published material in Bangla, author's name, and page number of the document from which the text is taken. We first developed a Java program to create xml files from those text files based on the file names and/or topic names. At first we identified the 1270 files by filename. In the second stage, based on topic names, we classified these files into nine (9) text categories: literature, fine arts, natural science, social science, commerce, school text, medical text, legal text, and mass media text. These major text categories cover all the files that we have used for our purpose. Some text files, however, contain various sub-types of texts. To manage these files, we have grouped them into 27 general xml classes, each of which contains 50 files, irrespective of topic or filename. These are required because when a user searches the total corpus database, these 27 xml classes are parsed first. On the other hand, when a user searches the files based on a topic name or a filename, the respective file of that name is used to accelerate the search process. We have thoughtfully adopted this strategy to avoid creating an exceptionally large xml file consisting of the whole corpus of 1270 text files; hence the corpus is sub-divided into groups of 50 files, and each group is converted into an xml file. Since the same text file can be used for different corpus-based searches (e.g., topic-based search, file-based search, etc.), we want to keep backup files ready so that if one file-search algorithm fails (say, the topic-based search), a user who is searching for a word in the whole corpus can still find the word with the help of the other types of search.

A well-formed tagset is used to create the xml files, with the corpus as the document root. It is divided into multiple files, each one having an attribute file number as well as a Header, a Text, and an Extent as its children. The Header contains a single child type, which again contains a single child with a specific child-type name; for instance, consider the field Book. This sub-division of the type is done because if, in future, text data from other sources like newspapers, journals, pamphlets, etc. are also added to the corpus, their Header field will vary to a large extent to register their separate subject category as well as linguistic identity. Moreover, by adding some extra field types a user can easily check which type of text data is obtained and from which source. The use of the Header format for this kind of operation, in our view, will be easier and more effective than modifying the entire code set.
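The overall xml layout described above (a corpus root holding multiple file elements, each with a Header, a Text, and an Extent child) can be sketched as follows. This is only an illustrative Python sketch; the authors' converter is a Java program, and the element and attribute names shown here are inferred from the prose rather than taken from the released code.

# Sketch of the document skeleton described above; element/attribute names are
# inferred from the description (the authors' actual converter is written in Java).
import xml.etree.ElementTree as ET

def make_corpus_skeleton(file_numbers):
    corpus = ET.Element("corpus")                 # the corpus is the document root
    for fno in file_numbers:
        f = ET.SubElement(corpus, "file", {"no": str(fno)})
        ET.SubElement(f, "Header")                # metadata (topic, author, etc.)
        ET.SubElement(f, "Text")                  # tagged sentences go here
        ET.SubElement(f, "Extent")                # word/sentence totals go here
    return ET.ElementTree(corpus)

tree = make_corpus_skeleton(range(1, 4))
ET.indent(tree)                                   # pretty-print (Python 3.9+)
tree.write("sample_corpus.xml", encoding="utf-8", xml_declaration=True)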
For instance, the field "Book" contains the following six sub-heads:

(a) Header
• Topic: topic of the data, e.g., Commerce
• Sub-topic: sub-topic of the data, e.g., Accountancy
• Author: name of the author (if not specified, the field is left empty)
• Time: time of publication (if not specified, it remains empty)
• Book Name: name of the book in Bangla script, e.g., িহসাবশাst (hisābśāstra) "Accountancy"
• Page: page numbers of the book from which the text data is taken

(b) Text
Each text file contains a large number of sentences, each one having an attribute 'sentence number', children (the sentence body), and the number of words in that sentence. The sentence body, in actuality, consists of words, alphabets, digits, punctuation marks, and other characters that combine to form the structure of a sentence as a string of many words separated by empty spaces. The programmatic representation of the scheme looks like the following (Bloch 2005):

• s: sentence with sentence body and number of words; it has the attribute n and the children sen and now
• sen: sentence body consisting of words, punctuation, etc. It is to be noted here that if part-of-speech tagging has to be done in future, it can be added as an attribute to the field w.
  o w: word
  o p: punctuation, having an attribute type which can take the values pnt, pnc, pul, and pur:
    pnt: terminating punctuation which ends a sentence, like ।, !, and ?
    pnc: punctuation where the sentence does not end, like ;, ,, : etc.
    pul: left brackets, [, {, (
    pur: right brackets, ], }, )
  The most notable point is that both the left and the right brackets are marked because the English meanings of some Bangla scientific words and technical terms are specified by enclosing them within brackets. If, for any reason, those meanings associated with the words are required for some analysis, they can be used. On the other hand, if anyone wants to extract and display the Bangla words only, this can also be done by hiding the bracketed words that contain the English meanings. Further, other symbols that are encoded to define specific textual functions in the texts are the following:
  o q: quotation marks (" and ")
  o n: numerals like 5 or 5th, etc.
  o a: all single-letter entities
  o s: symbols like &, @, #, $, %, ^, *, <, >, -, _, +, =, \ and /
• now: number of words in that sentence

(c) Extent
Additional information about words is embedded within the domain Extent. That is, it contains details about the extent of each file and has two children, as follows:
• no_of_word: total number of words in a file
• no_of_sentence: total number of sentences in a file
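As a concrete illustration of this markup scheme, the short sketch below tags a tokenised sentence in the way the sample in Figure 5 (next) shows. It is only a sketch: the production tagger is part of the authors' Java converter, and the whitespace tokenisation and example sentence are assumptions.

# Sketch of the sentence markup scheme (s > sen > w/p, plus now); the authors'
# production tagger is written in Java, so this is for illustration only.
import xml.etree.ElementTree as ET

TERMINATORS = {"।", "!", "?"}                       # pnt: sentence-ending punctuation

def tag_sentence(tokens, sentence_no):
    s = ET.Element("s", {"n": f"{sentence_no:06d}"})
    sen = ET.SubElement(s, "sen")
    words = 0
    for tok in tokens:
        if tok in TERMINATORS:
            ET.SubElement(sen, "p", {"type": "pnt"}).text = tok
        else:
            ET.SubElement(sen, "w").text = tok
            words += 1
    ET.SubElement(s, "now").text = str(words)       # now = number of words
    return s

element = tag_sentence(["বাংলা", "ভাষা", "।"], 3)
print(ET.tostring(element, encoding="unicode"))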
Based on the strategies and techniques stated above, each sentence in each text file of the Bangla text corpus is tagged in the following manner (Figure 5):

File Name: BACO8
Text Type: Commerce
Sentence No: 3
Untagged sentence: িহসাবতtt সmেn আেলাচনা r করার পূেব তtt বলেত িক বাঝায় সটা s হoয়া দরকার।
Tagged sentence:
<s n="000003"> <sen> <w>িহসাবতtt</w><w>সmেn</w><w>আেলাচনা</w><w> r</w><w>করার</w><w>পূেব</w><w>তtt</w><w>বলেত</w><w>িক</w><w> বাঝায়</w><w> সটা</w><w>s </w><w>হoয়া</w><w>দরকার</w> <p type="pnt">।</p></sen> <now>13</now> </s>

Figure 5. Sample of a tagged Bangla sentence

At the end of this initial preparation phase we have the xml documents ready, from which the necessary data can be retrieved by the application of the tools.

5.3. Creating web-based tools

We have used JSP for creating the application, since it is platform-independent and very convenient to work with. The code that we have used is very simple JAVA code for XML parsing; it produces a result set, which is further refined to produce the final results expected from the programme. We have relied heavily on the W3C document object model and the javax.xml parsers. In most cases, we have used DOM parsing for searching words, substrings of words, and collocations, besides other functions (Brill 2002; Frath & Gledhill 2005). The primary advantage of using the javax.xml and org.w3c.dom libraries is that these operations are less complex in nature and consume less time to parse the xml files, due to which retrieval of results often becomes much faster; thus the latency of the application is reduced quite significantly. The resultant output, at the end of this phase, is a functional application-based interface which is capable of performing all the three major functions specified in Section 2.

5.4. Developing the user interface

The actual user interface is developed in a simple manner, minimizing the amount of complexity in the operation of the system. Keeping the needs and levels of efficiency of the end users in mind, the interface is developed in such a manner that it alleviates the workload of the end users as much as it can. A simple look at the interface will show that it is self-sufficient and self-explanatory (Figure 6).

Figure 6. A simple interface of the BCST

We believe that anyone, following our strategy and techniques, can easily develop similar corpus searching tools for any Indian language corpus. There are, however, some pre-defined guidelines that cannot be ignored at the time of developing such an interface. The guidelines are the following:

1. The text files of the corpus should have the same header format with eight entries;
2. Each word, number, symbol, as well as every separate lexical entity, should be clearly demarcated with a space before and after it in the normalized corpus.

If these two basic guidelines are followed, then the same JAVA program that we have used here can also be used for creating xml files from any kind of text corpus of Bangla as well as of other natural languages, across the synchronic range and diachronic variations. Moreover, the xml files for corpora of other Indian languages can also be created with some rudimentary modifications of the program designed by us.
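For readers who want to see what DOM-based retrieval over files laid out as in Figure 5 looks like, the following sketch uses Python's W3C-style DOM module to mirror the parsing approach. The application itself is written in Java/JSP, and the file name used here is a placeholder.

# The application itself is written in Java/JSP; this sketch uses Python's W3C-style
# DOM module to show the same parsing idea on xml files laid out as in Figure 5.
# The file name "class_01.xml" is a placeholder.
from xml.dom import minidom

def find_word(xml_path, query):
    """Return (sentence_number, sentence_text) pairs whose <sen> contains the query."""
    doc = minidom.parse(xml_path)
    hits = []
    for s in doc.getElementsByTagName("s"):
        words = [w.firstChild.data for w in s.getElementsByTagName("w") if w.firstChild]
        sentence = " ".join(words)
        if query in sentence:
            hits.append((s.getAttribute("n"), sentence))
    return hits

for number, sentence in find_word("class_01.xml", "আেলাচনা"):
    print(number, sentence)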
Furthermore, once the xml files are created from the JAVA program, the same set of JSP applications that we have designed can be used on other Bangla corpora or on the corpora of other Indian languages. This observation holds for word-based and lexical collocation based search operations as well as for similar functions that are applied on digital text corpora.

6. IMPORTANCE OF THE XML FILES AND TOOLS

The importance of xml files of digital texts is immense in various domains of linguistic and language technology related research and application. These xml files, along with basic word tagging techniques, can be aptly utilized for part-of-speech tagging of words as well as for web-based applications like searching, sorting, and analyzing POS tagged texts. In descriptive and applied linguistics such files are highly useful for frequency calculation of words, lexical sorting, basic vocabulary compilation, and language teaching (Anthony 2004). On the other hand, the utility of corpus-accessing tools is enormous in natural language processing, language technology, and empirical linguistic studies. We visualise that the CAT that we have developed can be utilized by language researchers in various NLP applications, such as developing interfaces for grammar checking, developing systems for capturing usage patterns of words, making concordances of word use, grouping local words, defining distribution patterns of words, identifying patterns of lexical combination and lexical collocation, parsing sentences, addressing queries, modelling linguistic search in corpora, information retrieval, extraction of grammatical properties, text understanding, E-learning, E-governance, and many other works (Hearst 2005; Wallis and Nelson 2001).

7. CONCLUSION

Since there are several merits of having a corpus-search tool, the present attempt should be appreciated with an open heart, although, on technical merits, it needs some amount of polishing to make it platform independent and user-friendly. To make this happen we are trying to arrange the output from the search queries in a more usable format, such as the CSV (comma-separated values) file format, so that future statistical language models could be trained on the query output. In fact, the basic summary statistics elicited from this format should also form a part of the query results, referring to the overall distribution of the target lexical items in the corpus database vis-à-vis in the language. Moreover, to make this tool maximally useful, we need to apply some development iterations before it can become truly indispensable in the area of corpus-based lexical search. Although we can visualize many more applications of these tools to Indian language corpora, not much effort has been initiated to develop such tools for most of the Indian language corpora. However, in recent years a significant rise in the number of Indian scientists getting associated with this kind of work has been noticed, and such a positive phenomenon gives us promising aspiration and healthy expectation. The shady side of this collective enterprise, however, is that whatever tools are developed for the Indian language corpora, their rate of accuracy and efficiency is far below that of the corpus processing tools developed for English and other advanced languages of the world.
Moreover, most of the time, tools that are claimed to be developed for one Indian language are not available for other Indian languages. Our tool is absolutely free from this limitation, as it is freely available for one and all. This indicates that we sincerely need to take collective initiatives to develop good and useful corpus processing and accessing tools and devices for all Indian language corpora, so that we can analyze these corpora as well as retrieve information to be used in all spheres of language-related activity.

REFERENCES

Anthony, L. 2004. AntConc: A learner and classroom friendly, multi-platform corpus analysis toolkit. In Proceedings of WILL 2004: An Interactive Workshop on Language e-Learning (pp. 7-13).
Arnold, K. 2000. The Java Programming Language. New Delhi: Pearson Education.
Biber, D. 1993. Using register diversified corpora for general language studies. Computational Linguistics, 2, 219-241.
Bird, S. 2005. NLTK-Lite: Efficient scripting for natural language processing. In Proceedings of 4th International Conference on Natural Language Processing (pp. 1-8), IIT-Kanpur, India, Nov.
Bird, S. 2006. NLTK: The natural language toolkit. In Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions (pp. 69-72), Association for Computational Linguistics, Sydney, Australia, Jul.
Bird, S. 2008. Defining a core body of knowledge for the introductory computational linguistics curriculum. In Proceedings of the 3rd Workshop on Issues in Teaching Computational Linguistics, Association for Computational Linguistics.
Bird, S. & Loper, E. 2004. NLTK: The natural language toolkit. In Companion Volume to the Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (pp. 214-217), Association for Computational Linguistics.
Bird, S., Klein, E., Loper, E. & Baldridge, J. 2008. Multidisciplinary instruction with the natural language toolkit. In Proceedings of the Third Workshop on Issues in Teaching Computational Linguistics (pp. 62-70), Association for Computational Linguistics.
Bloch, J. 2005. Java Puzzlers: Traps, Pitfalls and Corner Cases. New Delhi: Pearson Education.
BNC. Available online: <http://corpus.byu.edu/bnc/>.
BNC. Available online: <www.natcorp.ox.ac.uk/docs/URG/index.html>.
Brill, G. 2002. CodeNotes for XML. New York: Random House.
Christ, O. 1994. A modular and flexible architecture for an integrated corpus query system. In Proceedings of COMPLEX'94: 3rd Conference on Computational Lexicography and Text Research (pp. 23-32), Budapest, Hungary, 7-10 Jul.
Coding tutorials. Available online: <http://www.w3schools.com/>.
Dash, N. S. 2007. Indian scenario in language corpus generation. In N. S. Dash, P. Dasgupta & P. Sarkar (Eds.), Rainbow of Linguistics (pp. 129-162). Vol. 1. Kolkata: T. Media Publication.
Frath, P. & Gledhill, C. 2005. Free-range clusters or frozen chunks? Reference as a defining criterion for linguistic units. Recherches Anglaises et Nord-américaines, 38, 25-43.
Hearst, M. 2005. Teaching applied natural language processing: Triumphs and tribulations. In Proceedings of the Second ACL Workshop on Effective Tools and Methodologies for Teaching NLP and CL (pp. 1-8), Association for Computational Linguistics, Ann Arbor, Michigan, Jun.
Holmes, H. P. R., Ahmad, K. & Abidi, S. 1994. A description of texts in a corpus: Virtual and real corpora. In W. Martin, W. Meijs, M. Moerland, E. ten Pas, P. van Sterkenburg & P. Vossen (Eds.), Proceedings of the 6th EURALEX International Congress on Lexicography (pp. 390-402), Amsterdam, The Netherlands.
Klein, E. 2006. Computational semantics in the natural language toolkit. In Proceedings of the Australasian Language Technology Workshop (ALTW2006) (pp. 26-33).
Robinson, S., Aumann, G. & Bird, S. 2007. Managing fieldwork data with toolbox & the natural language toolkit. Language Documentation and Conservation, 1, 44-57.
Shannon, C. 2003. Another breadth-first approach to CS I using Python. In Proceedings of the 34th SIGCSE Technical Symposium on Computer Science Education (pp. 248-251).
Sinclair, J. 2004. Intuition and annotation: the discussion continues. In K. Aijmer & B. Altenberg (Eds.), Advances in Corpus Linguistics: Papers from the 23rd International Conference on English Language Research on Computerized Corpora (ICAME 23) (pp. 39-59). Amsterdam/New York: Rodopi.
Vaughan, L. & Thelwall, M. 2004. Search engine coverage bias: evidence and possible causes. Information Processing and Management, 40(4), 693-707.
Wallis, S. & Nelson, G. 2001. Knowledge discovery in grammatically analysed corpora. Data Mining and Knowledge Discovery, 5, 307-340.

ACKNOWLEDGEMENT

I gladly acknowledge the technical support I received from Manomita Das and Kalpendu Das, two B.Tech students of the Bengal College of Engineering and Technology, Durgapur, West Bengal, for properly carrying out the work of developing the tools described above on the Bangla text corpus.

DR. NILADRI SEKHAR DASH
LINGUISTIC RESEARCH UNIT, INDIAN STATISTICAL INSTITUTE, INDIA.
E-MAIL: <NS_DASH@YAHOO.COM>
PH: 91-9433190295
Bangla Word Prediction and Sentence Completion Using GRU: An Extended Version of RNN on N-gram Language Model

2019 International Conference on Sustainable Technologies for Industry 4.0 (STI), 24-25 December, Dhaka

Omor Faruk Rakib, Shahinur Akter, Md Azim Khan, Amit Kumar Das, Khan Mohammad Habibullah
Department of Computer Science and Engineering, East-West University, Dhaka, Bangladesh
Email: rakib1001@gmail.com, shahinur.shoron@gmail.com, azimkhantonmoy@gmail.com, amit.csedu@gmail.com, h.rana10@ewubd.edu

Abstract— Textual information exchange, by typing information and sending it to the other end, is one of the most prominent mediums of communication throughout the world. People spend a lot of time sending emails or other information on social networking sites, where typing the whole text is redundant and time-consuming in this advanced era. To make textual information exchange speedier and easier, word prediction systems have been launched which can predict the next most likely word so that people do not have to type the next word but can select it from the suggested words. In this study, we propose a method that can predict the next most appropriate and suitable word in the Bangla language, and it can also suggest the corresponding sentence, contributing to the technology of word prediction systems. The proposed approach uses a GRU (Gated Recurrent Unit) based RNN (Recurrent Neural Network) on n-gram datasets to create language models that can predict the word(s) from the input sequence provided. We have used a corpus dataset collected from different sources in the Bangla language to run the experiments. Compared to other methods that have been used, such as an LSTM (Long Short Term Memory) based RNN on n-gram datasets and Naïve Bayes with Latent Semantic Analysis, our proposed approach gives better performance. It gives an average accuracy of 99.70% for the 5-gram model, 99.24% for the 4-gram model, 95.84% for the tri-gram model, and 78.15% and 32.17% respectively for the bi-gram and uni-gram models.

Keywords—word prediction, sentence suggestion, Bangla language, GRU based RNN, n-gram language model

I. INTRODUCTION

To communicate with each other in this modern era, we use different devices to type textual information and send it to the other end, but it is redundant to type the whole text again and again, and it also takes time to type the entire text. In that case, a textual word prediction system can ease our lives by predicting the next plausible word, so that we can pick the most likely predicted words instead of typing them. The problem of guessing which words are expected to follow a sequence of words or a given segment of text is called word prediction [3]. It is an 'intelligent' feature that can alleviate writing breakdowns by reducing the number of keystrokes necessary for typing words [5]. The higher the number of keystrokes that the user saves, the better the performance, as it reduces both the time and the effort required for producing a text [5]. In a computer program, when the user inputs a word or multiple words, it presents a list of possible words for that particular input, and when the intended word appears in the list, the user can click it and it will be inserted into the document [2].
Many people in the world are physically, perceptually, or cognitively challenged and are slow typists. These people can live a more comfortable life if they are able to type easily, aided by an automatic sentence completion technique based on word prediction. Also, word prediction can help early learners like students or novice researchers make fewer spelling errors and enhance their typing speed. Considering the benefits of word prediction systems, several research works have been carried out in which word prediction systems were implemented using different methods to reduce the time required to type and make life easier. Among them, P. Barman and A. Boruah have mentioned in their research work [1] that they used LSTM (Long Short Term Memory) with RNN (Recurrent Neural Network) to predict the next possible word in the Assamese language and obtained 88.20% accuracy for Assamese text and 72.10% accuracy for phonetically transcribed Assamese. Again, the authors of paper [4] proposed a model that can significantly predict the most desired words while typing; their research experimented on personal emails, call-center emails, weather reports, and cooking recipes. They developed an evaluation metric and adapted N-gram language models to predict subsequent words, which requires an accurate performance metric rather than the customary one. We have studied many techniques for predicting the next words of a sentence in different languages, especially for the English language, and there has been little satisfactory research on the Bangla language for predicting words in a sentence or predicting the whole sentence [6]. Contributing to this research field, a novel approach is proposed in this study to predict the next appropriate word (one or more) and suggest sentences in the Bangla language. In our proposed approach we use a GRU based RNN on n-gram datasets to create language models which can predict the next most likely Bangla word(s) from a given sequence of input words. As this research work deals with sequential data, the next word is predicted based not only on one word but on one or several words and their order; hence an RNN is used to train the dataset. Then again, GRU is used to solve the vanishing gradient problem, a significant drawback of the basic RNN. However, the proficiency of this model largely depends on the dataset used, so we have tried to collect the dataset from different sources such as the daily "Prothom-Alo" newspaper [7], BBC Bangla news [8], and Bangla academic books [9]. Although there are several research works on word prediction in the Bangla language using machine learning approaches, the system still needs to be upgraded for better accuracy, and our proposed method improves the word prediction system by providing more accurate and efficient predictions.
The overall contributions of this research work are:
• As per our knowledge, no research work has been done using the same method for the Bangla language that we have proposed.
• This method can suggest complete sentences simultaneously with most-likely next-word prediction for the Bangla language.
• A large dataset, not used earlier for Bangla word prediction, is used to increase the accuracy of our proposed method.
• Our proposed method gives better accuracy than other methods that researchers have used.

The rest of the paper is constructed as follows. Section II discusses the related works, Section III presents the methodology, including the dataset overview and implementation, Section IV shows the result analysis, and lastly, Section V describes the limitations and future works as the conclusion.

II. RELATED WORKS

In recent years, several research works have been done on textual word prediction systems to find the most suitable and appropriate next word, and Bangla word prediction and sentence completion is one of these research topics [21] [22] [23] [24] [25] [26]. In paper [2], the authors considered disabled people and students who use keyboards for searching on websites and proposed a model that predicts the next several words, rather than one word, to complete the sentence. For this work, they used a stochastic, i.e. N-gram, modeling process. Also, they used a large Bangla corpus of different types of words to obtain better performance. Again, to improve the communication system while texting, the authors of paper [10] proposed a model that predicts and suggests a grammatically more appropriate next word, reducing the keystrokes required by the users. They used a probabilistic language model based on N-grams for prediction. In paper [11], the author presents a new approach based on context features and machine learning to predict words. This method treated the problem as a learning-classification task. Using various feature selection techniques of machine learning, such as SVM (Support Vector Machine) with MI, χ2, and more, the author trained the word predictor. The experiment was done using several datasets, and compared to other similar works, this method gives 91% correct word prediction. People with disabilities may need a tool for communicating with each other; in this case, text generation is an essential activity to make communication easier for them. The objective of the research in [12] is to upgrade the quality of a word prediction system for Brazilian Portuguese to reduce the time needed to write texts. In this research, the authors used the POS (parts-of-speech) of the previous words to predict the possible POS of the next words. The predicted words were generated from those anticipated POS and the information contained in the lexicons. They used artificial neural networks, SVM, and regularized logistic models to predict word POS in Brazilian Portuguese, depending on the POS of the 1, 2, 3, or 4 previous words. Again, they also discussed a meta-learning strategy for algorithm selection and a fusion algorithm to combine them.
After the experiments, it was concluded that this approach could correctly predict 79.95% of the words, with a maximum hit rate of 28.5%. Again, in paper [5], researchers developed a model for predicting the next word of a Bangla sentence using stochastic, i.e., N-gram, language models such as unigram, bigram, trigram, and deleted interpolation and back-off models, for auto-completing a sentence by predicting the next suitable word. They mention that the results of this research were promising and also helpful for reducing misspellings. Another paper [13] says that the use of artificial intelligence in predicting the next word can provide restriction-free communication and understanding. In most word prediction algorithms, it is observed that the authors used a collection of words that shows likeness and is directed along a fixed linear path. But analyzing the frequency of the words with the pattern recognition of machine learning and adding new words into a local dictionary can improve the overall process. Over time, it upgrades and provides accurate results, and it can be refined by comparison with a cloud-based dictionary.

III. METHODOLOGY

A. Dataset Overview

We have assembled a large amount of data in the Bangla language to assess the new approach that we have proposed; the total amount of gathered data is 170 thousand words, collected from different sources. Table 1 shows the statistics of the data gathered from their origins.

Table 1. List of Collected Dataset
Data Collection Source                    Total Words    Unique Words
The Daily Prothom Alo Newspaper [7]       75,000         9,686
Bangla Academic Books [8]                 45,000         5,075
BBC News Bangla [9]                       50,000         8,239

After collecting the dataset from different sources, we cleaned it using a cleaning function, which was implemented to remove unwanted objects ("", (), /, !), English words, and other-language words from the Bangla text dataset. This cleaning function also helped turn the initial dataset into a standard one so that it can be used later for other purposes. Afterward, 5 different datasets were created from the cleaned dataset using the idea of n-grams: uni-gram, bi-gram, tri-gram, 4-gram, and 5-gram. Figure 1 shows the structure of dataset cleaning and dataset creation.

Figure 1: Structure of Dataset Processing

An n-gram is a language model that assigns probabilities to a sequence of words, and as the n-gram language model is a contiguous series of n items of a simple text, we have used it in this study [14]. Generally, the number of input words can be different each time we want to predict the next word or words. Whenever the input is a single word and the output is a single word, it is called 1-gram or uni-gram [15]; when the input is two words and the output is a single word, it is called 2-gram or bi-gram [15]. Again, when the input is three words and the output is a single word, it is called 3-gram or tri-gram [15], and uniformly for 4-gram and 5-gram.
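To illustrate how the n-gram training pairs described above can be produced from the cleaned text, consider the following sketch. The paper does not publish its preprocessing code, so the function name and the whitespace tokenisation here are assumptions.

# Illustrative sketch of building n-gram (input, output) pairs from cleaned text;
# the paper does not publish its preprocessing code, so the tokenisation
# (simple whitespace splitting) and names are assumptions.
def ngram_pairs(sentence, n):
    """Return (input_words, next_word) pairs for an n-gram dataset."""
    tokens = sentence.split()
    return [(tuple(tokens[i:i + n]), tokens[i + n])
            for i in range(len(tokens) - n)]

sentence = "বাংলা ভাষা আমাদের মায়ের ভাষা"
print(ngram_pairs(sentence, 1))   # uni-gram pairs: one input word, one output word
print(ngram_pairs(sentence, 2))   # bi-gram pairs: two input words, one output word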
<s>knowledge, n-gram language model has introduced up-to 4-gram, but we have initiated till 5-gram in this work. Because when the input length is more than 5 words, we only take the last 5 words as input and sent them to the trained 5-gram model. Usually the last 4 or 5 words are sufficient for understanding the dependency of the sequence. Now to understand better about this language model, suppose we have a Bangla sentence for instance, বাাংলা ভাষা আমাদের মাদের ভাষা। Uni-gram model presentation of this sentence is- Input, X Output, Y বাাংলা ভাষা ভাষা আমাদের আমাদের মাদের মাদের ভাষা। Bi-gram model presentation for this sentence is - Input, X Output, Y X1 X2 বাাংলা ভাষা আমাদের ভাষা আমাদের মাদের আমাদের মাদের ভাষা। Likewise, for 3-gram or tri-gram, it takes three input values and gives one corresponding output value like uni-gram and bi-gram and similarly for 4-gram, 5-gram. Figure 1: Structure of Dataset Processing B. Implementation In the basic n-gram language model, it measures the probability of possible next word, and whenever it gets all the probabilities for next possible word, it chooses the highest probability word and sets it as the next word (one or more). The next most likely value is calculated from a probabilistic view, and as we wanted to predict the next value from the input value বাাংলা ভাষা, we have calculated the highest probability of the next word using equation (1). Next word = Max (P (আমাদের | বাাংলা ভাষা, P (মাদের | বাাংলা ভাষা), P (ভাষা | বাাংলা ভাষা)) --------- (1) Along with the satisfactory contribution of the n-gram in word prediction, there is a hindrance to the n-gram language model, and that is, n-gram language model cannot deal with issues arrived for zero probability caused by the absence of the expected next word in the dataset. Hence, most often model shows zero probability and cannot suggest the next word with highest probability, and the whole method get failed to predict the most appropriate value. N-gram also works inefficiently when the dataset is vast with very long sequence of data input or a considerable amount of N. Several methods, Back-off and Katz Back-off was applied to smooth the probability distribution by tuning n-gram with small count [17]. However, we have upgraded n-gram with Neural Network to solve the problem of n-gram because Neural Network can detect the pattern of input and suggest corresponding output [18] and in this work, we are dealing with Bangla corpus data to predict the next suitable words, and also it suggests a complete sentence. Hence, considering the dataset that we have used and our aim of this research, Neural Network, as well as RNN (Recurrent Neural Network), was also used because it works better with sequential data with dependencies among themselves to upskill that 5 different datasets. RNN perpetuated state by using its’ output as input from one to the next iteration, and this task of RNN is expressed by equation (2) where u is the weight, multiplied with</s>
To overcome these problems, we upgraded the n-gram approach with a Neural Network, because a Neural Network can detect the pattern of the input and suggest a corresponding output [18]; in this work we deal with Bangla corpus data to predict the next suitable word(s) and also to suggest a complete sentence. Hence, considering our dataset and the aim of this research, a Neural Network, specifically an RNN (Recurrent Neural Network), was used to train the five different datasets, because an RNN works better with sequential data whose elements depend on one another. An RNN carries state forward by feeding its output from one iteration as input to the next; this is expressed by equation (2), where u is the weight multiplied with the current input x_t, w is the weight multiplied with the previous output h_{t-1}, and f is the activation function [19]. Figure 2 shows the structure of training the models with the GRU-based RNN.

h_t = f(u · x_t + w · h_{t-1})    (2)

Figure 2: Architecture of training the models with GRU-based RNN

However, an RNN also has difficulty remembering the effect of the earlier steps in a long sequence: even if the parameters of the early layers change dramatically, their effect on the output is shallow. This is known as the vanishing gradient problem. Generally, two gating mechanisms are used to solve it: LSTM (Long Short Term Memory) and GRU (Gated Recurrent Unit). We have used GRU in this study because it uses only two gates, an update gate and a reset gate, whereas LSTM uses three gates: input, forget, and output [16]. In addition, LSTM maintains an internal memory to remember the effect of earlier layers, while GRU needs no extra memory, which makes it easier to implement and faster to train on the dataset. This makes GRU more efficient than LSTM for medium-length sequence data. In a GRU, the update gate and the reset gate are vectors that help decide which information passes through, and they are trained to remember data from long ago without discarding information that is irrelevant for prediction [17]. Equations (3) and (4) define the update gate and the reset gate, which determine how much past data should be remembered and how much should be forgotten [17]. In equation (3), z_t is the update gate calculated for time step t, W_z is its weight, and h_{t-1} is the information of the previous unit t-1, which is multiplied by U_z. Similarly, r_t in equation (4) is the reset gate calculated for time step t, W_r is its weight, and h_{t-1} is the information of the previous unit t-1, which is multiplied by U_r; σ denotes the sigmoid activation.

z_t = σ(W_z · x_t + U_z · h_{t-1})    (3)
r_t = σ(W_r · x_t + U_r · h_{t-1})    (4)

As shown in Figure 2, we used the GRU-based RNN to train our previously created five datasets (Uni-gram to 5-gram) and built five corresponding models. Figure 3 shows the structure of the trained models, which have five hidden layers named embedding_1 (Embedding), gru_1 (GRU), gru_2 (GRU), dense_1 (Dense), and dense_2 (Dense), with 1,681,721 parameters in total.

Figure 3: Structure of layers in our training process
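The paper lists only the layer names and the total parameter count, so the following is a minimal Keras sketch of a model with that layer sequence. The vocabulary size, embedding dimension, and GRU units are assumptions chosen for illustration, so the parameter count will not match the reported 1,681,721 exactly.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GRU, Dense

def build_model(vocab_size, embed_dim=64, gru_units=128):
    """Embedding -> GRU -> GRU -> Dense -> Dense, mirroring Figure 3."""
    model = Sequential([
        Embedding(vocab_size, embed_dim),              # embedding_1
        GRU(gru_units, return_sequences=True),         # gru_1
        GRU(gru_units),                                # gru_2
        Dense(gru_units, activation="relu"),           # dense_1
        Dense(vocab_size, activation="softmax"),       # dense_2
    ])
    model.compile(loss="categorical_crossentropy",
                  optimizer="adam", metrics=["accuracy"])
    return model

# One model per n-gram dataset: the n-th model is trained on the
# dataset whose inputs are n words long (uni-gram to 5-gram).
models = {n: build_model(vocab_size=10_000) for n in range(1, 6)}
```

The softmax output layer assigns a probability to every word in the vocabulary, so the predicted next word is simply the index with the highest probability.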
1) Word Prediction

After training on all five datasets (Uni-gram to 5-gram), we have five trained models for inputs of different lengths. Each model takes a word sequence of a particular length as input and produces a single output: the word most likely to follow that sequence. If the input sequence contains one word, it is sent to the trained Uni-gram model, which takes a one-word input and predicts the most likely next word. Similarly, if the input contains two words, it is sent to the trained Bi-gram model, which takes two input words and predicts one output word, and likewise for the remaining trained models. Figure 4 represents the word prediction process for inputs of different lengths using the five trained models. There is one exception: if the input sequence is longer than five words, only the last five words are passed to the trained 5-gram model, because the last four or five words are generally enough to establish the dependency of the sequence.

Figure 4: Word prediction from the trained models

2) Sentence Prediction

In our work, we not only predict the next most likely word but also suggest a full sentence from the given word sequence. To do this, we use the same GRU-based RNN models trained on the n-gram datasets described above. Given the input words, we predict the next value from the sequence and then append the output (the predicted word) to the input, so that further words can be predicted from the newly extended input, eventually forming a complete sentence. This process continues until the end of the sentence is reached. In the Bangla language, the end of a sentence is marked by punctuation: the danda "।" for a normal statement and "?" for a question. The model therefore keeps predicting words until the end punctuation of the sentence is found, and the accumulated output is the suggested sentence.
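A minimal sketch of this iterative sentence-completion loop is shown below, assuming a predict_next_word helper that routes the current context to the appropriate trained model (Uni-gram to 5-gram) as described above; the helper name and the safety cap are our own illustrative additions, not the authors' code.

```python
END_PUNCTUATION = {"।", "?"}  # danda for statements, "?" for questions

def complete_sentence(words, predict_next_word, max_words=30):
    """Keep predicting and appending words until end punctuation appears."""
    sentence = list(words)
    for _ in range(max_words):                  # safety cap on sentence length
        context = sentence[-5:]                 # at most the last five words
        next_word = predict_next_word(context)  # routed to the right model
        sentence.append(next_word)
        if next_word and next_word[-1] in END_PUNCTUATION:
            break
    return " ".join(sentence)
```

The max_words cap is an added assumption to avoid an unbounded loop if the model never emits end punctuation.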
IV. RESULT ANALYSIS

To validate the proposed approach, it is essential to run the experiments and analyze the outcomes carefully. We therefore evaluated the approach on the corpus dataset, training the five models, which have identical structures, for 1000 epochs (Figure 3). As Figure 5 and Figure 6 show, the trained Uni-gram model has an average accuracy of 32.17% and an average loss of 276.44%, while the Bi-gram model has an average accuracy of 78.15% and an average loss of 53.36%. The Tri-gram model reaches 95.84% average accuracy and 8.52% average loss on the same dataset. Likewise, the 4-gram and 5-gram models show average accuracies of 99.24% and 99.70% with average losses of 2.04% and 1.11%, indicating that accuracy improves and loss decreases as n increases.

Figure 5: Graphical representation of average accuracy of the trained models (in percent) over 1000 epochs
Figure 6: Graphical representation of average loss of the trained models (in percent) over 1000 epochs

We also compared the experimental results of our proposed approach with the approaches proposed in papers [1] and [2] and found that paper [1] reports an accuracy of 88.20% and paper [2] an average accuracy of 63.5%, whereas we achieve 95.84% to 99.70% on average for the higher-order sequences. Figure 7 shows the comparison among the approaches used in paper [1], paper [2], and this study.

Figure 7: Comparison Chart of Average Accuracy

V. CONCLUSION

GRU-based RNNs have made a significant contribution to this work on predicting the next most appropriate and suitable Bangla word (one or more) and sentence. To justify the use of a GRU-based RNN, we compared our proposed method with other methods used for Bangla and other languages and obtained better accuracy (Figure 7). Although the Uni-gram model gives poor accuracy (32.17%), the accuracy for higher-order sequences such as Tri-gram, 4-gram, and 5-gram is high (95.84%, 99.24%, and 99.70%, respectively). The overall accuracy of this approach could be even more impressive with a larger dataset than the one used in this work. Working with a Bangla corpus was challenging, as no ready-made dataset exists for the Bangla language and we had to collect the data from different sources. In future work, we will try to collect a larger dataset to obtain better performance from the GRU-based RNN for Bangla next word and sentence prediction. Furthermore, this study can serve as a tool for sustainable technologies in industry, as its applications are broad and it can be used in many different sectors.

VI. REFERENCES

[1] P. P. Barman and A. Boruah, "A RNN based Approach for next word prediction in Assamese Phonetic Transcription," Procedia Comput. Sci., vol. 143, pp. 117–123, 2018.
[2] M. T. Habib, A. Al-Mamun, M. S. Rahman, S. M. T. Siddiquee, and F. Ahmed, "An Exploratory Approach to Find a Novel Metric Based Optimum Language Model for Automatic Bangla Word Prediction," Int. J. Intell. Syst. Appl., vol. 10, no. 2, pp. 47–54, Feb. 2018.
[3] "What is Word Prediction?" [Online]. Available: http://www2.edc.org/ncip/library/wp/What_is.htm. [Accessed: 03-Aug-2019].
[4] S. Bickel, P. Haider, and T. Scheffer, "Predicting sentences using N-gram language models," in Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing - HLT '05, 2005, pp. 193–200.
[5] M. M. Haque, M. T. Habib, and M. M. Rahman, "Automated Word Prediction in Bangla Language Using Stochastic Language Models," Int. J. Found. Comput. Sci. Technol., vol. 5, no. 6, 2015.
[6] R. Makkar, M. Kaur, and D. V. Sharma, "Word Prediction Systems: A Survey," Adv. Comput. Sci. Inf. Technol., vol. 2, no. 2, pp. 177–180.
[7] "Prothom Alo | Latest online Bangla world news bd | Sports photo video live." [Online]. Available: https://www.prothomalo.com/. [Accessed: 25-Aug-2019].
[8] "খবর, সর্বশেষ খবর, ব্রেকিং নিউজ, বিশ্লেষণ - BBC News বাংলা." [Online]. Available: https://www.bbc.com/bengali. [Accessed: 25-Aug-2019].
[9] "Bangla Academy Sangkhipto Bangla Avidhan (Bengali to Bengali Dictionary) ~ Free Download Bangla Books, Bangla Magazine, Bengali PDF Books, New Bangla Books." [Online]. Available: https://www.gobanglabooks.com/2017/08/bangla-academy-sangkhipo-bangla-avidhan.html. [Accessed: 25-Aug-2019].
[10] J. Dumbali and A. Nagaraja Rao, "Real-time word prediction using N-grams model," International Journal