Similar and closely related work has been done in the field of spelling checking, most notably on the detection and correction of non-word errors. However, as stated earlier, no significant research has addressed context-sensitive or semantic-level errors. B. B. Chaudhuri used an approximate string matching algorithm to detect non-word errors [8]. A direct dictionary lookup method was used by N. UzZaman and M. Khan [9] for misspelled-word errors and by Abdullah and Rahman [10] to detect typographical and cognitive phonetic errors. P. Mandal and B. M. M. Hossain [11] proposed a method based on the PAM clustering algorithm; it also did not deal with semantic errors. A few works operate at the semantic level. N. H. Khan et al. [12] introduced a model that uses n-grams to check at the sentence level whether a token is a valid word. K. M. A. Hasan, M. Hozaifa and S. Dutta [13] developed a rule-based method that detects grammatical semantic errors in simple sentences. In this research, we develop a method to detect and correct semantic-level errors such as typographical and homophone errors. To our knowledge, we are the first to detect multiple semantic errors in a sentence with the help of the Naïve Bayes classifier.

III. PROPOSED METHOD

Processing the Bangla language is not easy. We use Naïve Bayes classification over a confusion set of words built with an edit distance algorithm. Every confused word acts as a single class. Conditional probability then yields a score for each candidate, which helps us decide whether a word is appropriate and find the expected word. We also use Laplace smoothing to get a better result.

IV. METHODOLOGIES

To build the proposed method we need to follow some steps which will help us to achieve our goal.
The steps are:
• Collection of data
• Data preprocessing
• Extraction of the confused word list
• Application of the Naïve Bayes theorem
• Declaration of errors and suggestions

A. Collection of data

Evaluating any framework requires a lot of data. In NLP, data is essential to justify a method: more data helps demonstrate how well the method works. We therefore justified our method with a considerably large dataset, collected from the web, from newspapers available online, from blogs, etc., and stored in separate files. The corpus contains many types of content, such as politics, fiction, sports and entertainment.

B. Data preprocessing

The collected data were not in the format required by our method, so preprocessing was needed. We wrote Python code to remove unnecessary signs and symbols, including emoji, then split the text into sentences using Bengali punctuation rules and stored them in a file. From these processed data we collected the unique words, which we use as our dictionary; we also added dictionary words manually. Our method generates a set of confusion words for each target word, and we preprocess the confused words using this dictionary of unique words. For each word we gather the words that could plausibly be confused with it, applying the edit distance algorithm to create the set. We do not build confusion sets for stop words. Because our method uses the occurrences of words in a sentence and the co-occurrences of a word with its neighbor words to calculate probabilities, we also precompute word occurrences over the corpus and build a corpus of co-occurrence counts between words.

C. Extraction of the confused word list

In this step, we take every unique word in our dictionary and extract its confusion set with the help of the edit distance algorithm. The edit distance algorithm, also known as Levenshtein distance, finds the minimum number of operations (insertion, deletion and replacement) needed to transform one string into another. We use a maximum distance of 2: if a word can be transformed into the target word in at most 2 operations, we take it as a confused word for that target. We create the collection of confusion sets for every unique word in advance and store it as a file; we then extract the confusion set for each target word. We take a sentence as input and assume it contains no non-word errors. For every word in the input sentence, we obtain its list of confused words by searching our precomputed collection. Let the input sentence consist of n words.
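The confusion-set construction just described can be sketched in Python (the language the paper already uses for preprocessing). This is an illustrative sketch, not the authors' code: the function names edit_distance and confusion_set are ours.

```python
def edit_distance(a, b):
    # Levenshtein distance via dynamic programming: the minimum
    # number of insertions, deletions and replacements needed to
    # transform string a into string b.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # replacement
    return dp[m][n]


def confusion_set(target, dictionary, max_dist=2):
    # Every dictionary word within edit distance 2 of the target
    # (excluding the target itself) joins its confused word set.
    return [w for w in dictionary
            if w != target and edit_distance(target, w) <= max_dist]
```

In the method described above, this set would be precomputed once per unique dictionary word and stored, rather than recomputed per input sentence.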
Input sentence, IS = {W1, W2, W3, ..., Wn-1, Wn} (1)

For the ith word Wi we find a confused word set. If the number of confused words is m, then the set of confused words for Wi is

SCW = {cw1, cw2, cw3, ..., cwm-1, cwm} (2)

where cwj is the jth confused word in SCW (the set of confused words). Since the target word can generate a confused word list, there is a chance that a semantic error has occurred through a deletion or insertion error.

D. Application of the Naïve Bayes theorem

In this step we work with the list of confused words. If a target word has no confused word list, we can declare it error free. Otherwise, we follow a procedure to decide the result, using the Naïve Bayes classifier to determine which confused word best fits the sentence. The Naïve Bayes classifier is a machine learning classification model based on Bayes' theorem, which follows conditional probability as described below. For a word Wi from IS, Bayes' theorem can be written in a modified way as

P(Wi | W1 W2 ... Wi-1 Wi+1 ... Wn) = [P(W1|Wi) P(W2|Wi) ... P(Wi-1|Wi) P(Wi+1|Wi) ... P(Wn|Wi) P(Wi)] / [P(W1) P(W2) ... P(Wi-1) P(Wi+1) ... P(Wn)] (3)

Equation (3) can be written as the proportionality

P(Wi | W1 W2 ... Wi-1 Wi+1 ... Wn) ∝ P(Wi) ∏(k=1..i-1) P(Wk|Wi) ∏(k=i+1..n) P(Wk|Wi) (4)

In (4), Wi is replaced in turn by every confused word cwj from SCW, because Wi is our target word; the other words act as feature words. It is not effective to take every word in the sentence as a feature word: as the distance from the target word increases, the semantic relation with the neighbor word weakens. To avoid weak semantic relations, we set the neighbor distance to 5 and extract features within a distance of 5 on each side of the target word. Now, for every cwj from SCW, the feature set is

FS = {fw1, fw2, ..., fwz}; 1 <= z <= 8

We take fwl, the lth word in the feature set FS, and count the occurrences of fwl with the word cwj in our corpus. We also count how many of the corpus sentences contain cwj. These counts are used to calculate the probabilities. But what happens if a feature word never occurs with the confused word cwj? The probability becomes zero, which is a problem. Since we estimate probabilities from our corpus, and the corpus certainly does not contain every possible sentence, a feature co-occurrence with cwj may be valid even though it is absent from the corpus. To solve this problem, we use Laplace smoothing to avoid this type of error.
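The feature extraction and scoring just described can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the function names and the dictionary-based counts (cooccur, count_cw) are our own, and the conditional probabilities use the additive (Laplace) smoothing form the method relies on.

```python
from math import prod


def features(words, i, window=5):
    # Feature words for the target at index i: up to `window`
    # neighbors on each side, per the neighbor-distance-5 rule.
    return words[max(0, i - window):i] + words[i + 1:i + 1 + window]


def nb_score(cw, feats, cooccur, count_cw, vocab_size, alpha=1.0):
    # Product of smoothed conditional probabilities P(fw | cw),
    # one factor per feature word, as in the Naive Bayes
    # proportionality (4). The prior P(cw) would multiply this
    # in a full implementation; alpha is the smoothing constant,
    # so unseen co-occurrences never zero out the product.
    return prod((cooccur.get((cw, fw), 0) + alpha) /
                (count_cw.get(cw, 0) + alpha * vocab_size)
                for fw in feats)
```

Each candidate cwj in SCW would be scored this way against the same feature set, and the candidates then ranked by score.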
Laplace smoothing, also known as additive smoothing, is a way to smooth classified data. The Laplace smoothing equation is

P(fwl) = (counts(fwl) + α) / (counts(cwj) + α · counts(unique words)) (5)

where counts(fwl) is the number of occurrences of fwl with cwj; counts(cwj) is the total number of words across all corpus sentences that contain cwj; counts(unique words) is the total number of unique words in the corpus; and α is always 1 (α = 0 would mean no smoothing).

E. Declaration of errors and suggestions

After applying the Naïve Bayes theorem and Laplace smoothing, we obtain a probability for every confused word in SCW, and we can declare the target word an error or not on the basis of these probabilities. The word with the highest probability is the most appropriate for that position; words with lower probability are less appropriate. If the target word is not the highest-probability word, it is declared an error. If we treat every word cwj from SCW as
a class, then we can represent the Naïve Bayes classifier as

argmax over SCW of P(SCW|FS) = argmax over SCW of P(FS|SCW) P(SCW) = argmax over SCW of ∏(fw ∈ FS) P(fw|SCW) P(SCW) (6)

After an error is declared, the extracted confused word set is provided as the suggestion list, sorted by probability; the word with the maximum probability appears at the top of the suggestion list. Figure 1 presents the whole process.

Figure 1. Flow Chart of Proposed Method

V. EVALUATION

To evaluate the proposed method we built our own corpora by collecting data from the web; in total we built 4 corpora to test the model. The data are formatted line by line following the punctuation rules of the Bengali language. We take every sentence from the corpus as input, remove its stop words, and evaluate the sentence through our Naïve Bayes model. First we trained the model using our collected data as a training set. Then we injected errors into the testing corpus for the purpose of evaluation; errors were injected randomly by a program, because in practice it is quite difficult to collect semantic errors. Tables II-V below show some incorrect sentences together with the suggestion words and their scores. Since we take edit distance 2 to generate the confusion sets, there are many suggestion words; we show enough of them to demonstrate that the expected word is in the suggestion list. The following tables show the evaluation for the sentence: পুিলেশর ( িল) েখেয় বেকর (যুবেকর) মৃতুয্

TABLE II. FOR TARGET WORD (DELETION ERROR)
Suggested word | Score
িল | 1.7627185390517586e-20
গাছ | 3.409311006362797e-21
গিত | 1.5746065463027899e-21
গড়া | 1.9268956217462368e-22
ল | 3.517054829170595e-23
ন | 3.508881970897087e-23
 | 1.7681087832799074e-23

TABLE III.
FOR TARGET WORD বেকর (DELETION ERROR)
Suggested word | Score
যুবেকর | 1.5688367185562306e-22
বােকর | 1.43723710964709e-22
বুেকর | 7.026839026529137e-23
পদেকর | 7.025022836560187e-23
বেরর | 3.518419284740286e-23
বেকর | 1.7681087832799074e-23

The following tables show the evaluation for the sentence: সবাi বাল (ভাল) aকােজর (কােজর) জনয্ েনকী পােব

TABLE IV. FOR TARGET WORD বাল (REPLACED ERROR)
Suggested word | Score
েফল | 2.28311850932232e-27
ফজল | 2.2816419573851163e-27
খাi | 2.2805352953946707e-27
ভাল | 2.2797978785685965e-27
বাল | 5.722590488655557e-28

TABLE V. FOR TARGET WORD aকােজর (ADDITION ERROR)
Suggested word | Score
কােজ | 1.0793053263947006e-24
কােজর | 1.9437926723830688e-25
কােজi | 1.641408995548399e-25
কােলর | 1.2209853377939654e-26
aকােজর | 5.722590488655557e-28

In some cases the expected word appears too far down the list. This happens because of lack of data: words that occur more frequently rise toward the top of the list. This is a limitation of our current method; a technique to overcome it is mentioned in the conclusion. Another reason is that when a sentence contains more than one error, the other erroneous word is taken as context while processing the first, which affects the result. However, once the user selects the correct word for the sentence, the remaining errors get correct context words, which minimizes this effect. We used four corpora containing a total of 28057 sentences to test our model. We have
gained 90% accuracy on average on our trained corpora. Table VI shows the full performance outcome on the testing data.

TABLE VI. PERFORMANCE ON TRAINING DATA
Name of the Dataset | No of Sentences | No of Error Words | No of Words Detected as Error | Accuracy
Corpus 1 | 7115 | 6160 | 5609 | 91.05%
Corpus 2 | 8156 | 7124 | 6356 | 89.21%
Corpus 3 | 8038 | 6702 | 6089 | 90.85%
Corpus 4 | 4748 | 4001 | 3603 | 90.05%
Total | 28057 | 23987 | 21657 | 90.28%

Accuracy can go down when data are missing from the occurrence corpus: given how much data exists, our collected corpus may not contain the target word, and our method may fail to find a confusion set where in practice one exists for the corresponding target word. This is why we use Laplace smoothing to handle words absent from the occurrence corpus. Since no rich open-source corpus is available for Bangla, we cannot evaluate our method against other corpora. Also, to the best of our knowledge, existing Bangla systems such as Avro do not handle semantic errors seriously; they specialize in spell checking. That is why we do not provide a comparison with other existing systems.

VI. CONCLUSION

In this study we have put effort into solving typographical errors and homophone errors, which destroy the context of a sentence in the Bengali language. We handle not only single errors but also multiple errors in a sentence. Although we achieved a good level of accuracy, challenges and scope for improvement remain: when multiple errors arise, the time and space complexity increase, which is not a good characteristic of a model. In future work we will try to develop a stop-word corpus and to apply tf-idf (term frequency-inverse document frequency), which should give better weight to the influence of context words. A method to decrease the time and space complexity will also be developed.
ACKNOWLEDGEMENT

This research is supported by The Institute for Energy, Environment, Research and Development (IEERD), University of Asia Pacific (UAP).

REFERENCES

[1] K. Kukich, “Techniques for automatically correcting words in text,” ACM Computing Surveys, vol. 24, no. 4, pp. 377-439, 1992.
[2] A. R. Golding, “A Bayesian hybrid method for context-sensitive spelling correction,” arXiv preprint cmp-lg/9606001, pp. 1-15, 1996.
[3] M. Kim, S. K. Choi and H. C. Kwon, “Context Sensitive Spelling Error Correction Using Inter Word Semantic Relation Analysis,” 2014 International Conference on Information Science & Applications (ICISA), pp. 1-4, 2014.
[4] A. Islam and D. Inkpen, “Semantic text similarity using corpus-based word similarity and string similarity,” ACM Transactions on Knowledge Discovery from Data, vol. 2, no. 2, pp. 1-25, 2008.
[5] A. Islam and D. Inkpen, “Real-word spelling correction using Google web 1T 3-grams,” Proceedings of the International Conference on Natural Language Processing and Knowledge Engineering, vol. 3, pp. 1241-1249, 2009.
[6] Y. Bassil and M. Alwani, “Context-sensitive Spelling Correction Using Google Web 1T
5-Gram Information,” Computer and Information Science, vol. 5, no. 3, May 2012.
[7] K. W. Church and W. A. Gale, “A spelling correction program based on a noisy channel model,” COLING '90: Proceedings of the 13th Conference on Computational Linguistics, vol. 2, pp. 205-210, 1990.
[8] B. B. Chaudhuri, “Reversed word dictionary and phonetically similar word grouping based spell-checker to Bangla text,” Proc. LESAL Workshop, Mumbai, 2001.
[9] N. UzZaman and M. Khan, “A comprehensive Bangla spelling checker,” Proceedings of the International Conference on Computer Processing on Bengali (ICCPB), Dhaka, Bangladesh, 2006.
[10] A. Abdullah and A. Rahman, “A generic spell checker engine for South Asian languages,” Conference on Software Engineering and Applications (SEA 2003), pp. 3-5, 2003.
[11] P. Mandal and B. M. M. Hossain, “Clustering-based Bangla Spell Checker,” 2017 IEEE International Conference on Imaging, Vision & Pattern Recognition (icIVPR), pp. 1-6, 2017.
[12] N. H. Khan, G. C. Saha, B. Sarker and M. H. Rahman, “Checking the correctness of Bangla words using n-gram,” International Journal of Computer Application, vol. 89, no. 11, 2014.
[13] K. M. A. Hasan, M. Hozaifa and S. Dutta, “Detection of Semantic Errors from Simple Bangla Sentences,” 2014 17th International Conference on Computer and Information Technology (ICCIT), pp. 296-299, 2014.
/WP-GreekCentury /WP-GreekCourier /WP-GreekHelve /WP-HebrewDavid /WP-IconicSymbolsA /WP-IconicSymbolsB /WP-Japanese /WP-MathA /WP-MathB /WP-MathExtendedA /WP-MathExtendedB /WP-MultinationalAHelve /WP-MultinationalARoman /WP-MultinationalBCourier /WP-MultinationalBHelve /WP-MultinationalBRoman /WP-MultinationalCourier /WP-Phonetic /WPTypographicSymbols /XYATIP10 /XYBSQL10 /XYBTIP10 /XYCIRC10 /XYCMAT10 /XYCMBT10 /XYDASH10 /XYEUAT10 /XYEUBT10 /ZapfChancery-MediumItalic /ZapfDingbats /ZapfHumanist601BT-Bold /ZapfHumanist601BT-BoldItalic</s>
|
Relational Model of Conceptual Distance between Bangla Words*

Sibansu Mukhopadhyay (1), Sreerupa Das (2) and Rajkumar Roychoudhury (2)
(1) Department of IT&E, Govt. of WB, Society for Natural Language Technology Research, Kolkata, India; (2) Indian Statistical Institute, Kolkata, India

ABSTRACT

Words in a language are related to each other. This relation is based on their conceptual properties. (This paper avoids the term "semantic property", generally used by contemporary NLP workers for measuring distance between words, because we employ a different orientation behind the measurement of relatedness.) Essentially, this work brings psycho-sociological facts into the experiments, in which a number of native speakers of Bangla manually suggest distance measurements between any two words. The work presents a statistical approach, with a psycho-analytical elaboration, for measuring the conceptual distance between words in the Bangla language. To be precise, it calculates correlations between the assessments collected through a survey of different individuals. The conceptual distance is used to suggest the implicit pragmatic nature of Bangla words, and it also implies an elementary taxonomy for Bangla words. As a result, the conceptual distance between Bangla words in the semantic field can very usefully be quantified and can thus be a crucial factor for a computational application such as a Bangla WordNet. Incidentally, we find a very high correlation (r = 0.95) between two different sets of human judgments, and at the same time a reassuringly high correlation (r = 0.95 being the upper limit) is observed when the respondents repeated the same task with the same pairs of words at different points of time. This is a pioneering study in Bangla.

1. INTRODUCTION

Words in a language carry a number of conceptual properties. These properties are not necessarily physical.
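The agreement between two sets of human relatedness judgments, reported as r = 0.95 in the abstract, is a Pearson product-moment correlation. A minimal sketch with hypothetical rating data (the paper's actual survey scores are not reproduced here):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical relatedness scores (0-10) given by two groups of raters
# to the same six word pairs.
group_a = [9.1, 2.3, 7.8, 5.0, 8.6, 1.2]
group_b = [8.8, 2.9, 7.1, 5.5, 8.9, 1.0]

print(round(pearson_r(group_a, group_b), 2))
```

A value near 1 means the two groups rank the word pairs almost identically; the figure reported in the paper depends, of course, on the real survey data.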
Ordinarily, from the grammarian's point of view, a word seems to have some intrinsic semantic properties generated throughout the word-formation process. However, native speakers of a language assign some values to a particular word. The values depend on the social concept, the context in which the word is used, the speakers' upbringing, etc.

*Address correspondence to: Rajkumar Roychoudhury, Indian Statistical Institute, Kolkata, India. E-mail: rajdaju@rediffmail.com
© 2015 Taylor & Francis. Journal of Quantitative Linguistics, 2015, Vol. 22, No. 2, 157–176. http://dx.doi.org/10.1080/09296174.2014.1001638

Treating the word as a sign began with Saussure's dictum on the signifier–signified relation in language. According to Saussure, a sign consists of two parts, i.e. a "signifier" (signifiant) and a "signified" (signifié). The sign takes its form as the signifier, and it represents the concept as the signified (De Saussure, 1983, p. 67; De Saussure, 1974, p. 67). The relationship between the signifier and the signified is referred to as "signification". Consider a word, for example, manuS "human". manuS is a sign which takes the linguistic form (word) "manuS" as its signifier, while the concept of manuS it represents is the signified of that signifier. Thus we follow this model of "signification" proposed by De Saussure (1983) and consider words as arbitrary signifiers loaded with social values.

Most modern trends in linguistics approach the study of language objectively. These trends presuppose language as a body. Semantics finds the relationship between the signifier and the signified. For example, a word as a composition of speech
sounds means an object or a concept or any kind of entity that may or may not exist. This is the process of signification, which is a social phenomenon. We also consider this process a social phenomenon, oriented to the social structure, and thus signifiers (words) have some common, socially correlated properties with an underlying ontological design among them. For example, if we consider x as a word, we can certainly relate it to a signified, denoted by X here. We believe x has a set of properties acquired through social processes, some of which intersect with the set of properties of y, where y signifies Y. Consider Table 1 below to illustrate the model we are discussing. Suppose we take three signs: x, y and z. We agree to take x for manuS "human", y for pakhi "bird" and z for rickshaw "man-driven vehicle". Now follow the table.

Table 1. Words and their conceptual ideals.

    1                     2                  3
    manuS "human" (x)     pakhi "bird" (y)   rickshaw "man-driven vehicle" (z)
    (X)                   (Y)                (Z)

In column 1, x is a signifier and X is signified. Similarly, y and z are signifiers and Y and Z are signified in columns 2 and 3 respectively. We draw an inference that establishes that x, y and z are equated with X, Y and Z respectively. We propose in this paper that every word (signifier) is in some way connected with every other. Therefore, x, y and z are interconnected in terms of conceptualization or the thought process: human and bird are animals, both have two legs and the power to produce sound, etc., and the third sign z stands for a type of vehicle, one of the means of transport used by humans. However, it is apparently not connected semantically with a bird. We propose that a bird and a rickshaw are possibly interconnected in terms of the human thinking process.
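The intersecting property sets just described can be given a toy quantification as Jaccard overlap. The property tags below are illustrative assumptions, not the paper's survey data:

```python
def jaccard(props_a, props_b):
    """Overlap of two property sets as |intersection| / |union|."""
    a, b = set(props_a), set(props_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hand-picked, illustrative property sets for the three signs above:
# x = manuS "human", y = pakhi "bird", z = rickshaw "man-driven vehicle".
properties = {
    "manuS":    {"animate", "two-legged", "produces-sound", "transport"},
    "pakhi":    {"animate", "two-legged", "produces-sound", "flies"},
    "rickshaw": {"inanimate", "vehicle", "man-driven", "transport"},
}

for w in ("pakhi", "rickshaw"):
    print(w, round(jaccard(properties["manuS"], properties[w]), 2))
```

Under these toy tags, manuS scores higher against pakhi (shared animacy, legs, sound) than against rickshaw (shared transport link only), while pakhi and rickshaw share nothing, mirroring the intuition that the bird and the rickshaw are connected only through the human thinking process.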
For example, a human may see a bird passing on a tree while she is riding a rickshaw, or there may be other means by which a bird can be associated with a rickshaw.

Linguistic entities like words, in a semantic space, share several common possessions through which they are interrelated, though their conceptual meanings may differ. This commonness or relatedness can be measured in many ways. Quantification deploying network representation for semantic similarity between words in a language is now a popular trend in the study of natural language processing, cognitive science and psycholinguistics. This paper tries to quantify relatedness between Bangla words with the help of experiments duly conducted on the relatedness between a set of selected pairs of Bangla words. In this paper a different coinage is chosen to convey our presupposition about the question of subjectivity in the data-collection procedure. We use Conceptual Relatedness (henceforth abbreviated CR) instead of semantic distance or similarity as the central theme of this study.

CR, or more traditionally semantic similarity between words, is a context-dependent phenomenon. It is necessary to trace the conceptual relation between words to calculate semantic similarity. Properties in the words should be common even if we conclude that these
|
<s>words are semantically not very similar. For example, the words Cow and Horse are not semantically similar to each other, but they are conceptually related in terms of social contexts. Everyone knows that both are considered domestic animals, and historically both cows and horses were used in agriculture and transportation (Bollegala, Matsuo, & Ishizuka, 2009).

It is very crucial to distinguish our aim in this project from the other approaches and to show why we concentrate on the terminology Conceptual Similarity, using the adjective "conceptual" instead of "semantic". In this paper we have focused on the lexical units, as these are essentially social

RELATIONAL MODEL OF CONCEPTUAL DISTANCE BETWEEN BANGLA WORDS 159

phenomena manufactured and value-added by the speakers. We believe there are some idiosyncrasies as well as some objective perspectives in the language. The idiosyncrasy can be traced, at least qualitatively, by the measurement of CR among different speakers. We also believe that a word is subjectified by a speaker's social and psychological preferences. In our experiment among the Bangla speakers, we have done a projection which effectively reviews the psycho-sociological status of the individuals, albeit through quantified results. We saw that this basically depends on the various facets of the human mind. Therefore, it is more effective if one interacts with a social being who points out the distance between two or more words looking around his/her habitation than looking into a pair of words as dictionary entities with their fixed semantics.

2. RELATED WORK

While reviewing the existing literature associated with the study of semantic similarity, we found that semantic similarity in a taxonomy is measured among lexical entities (Resnik, 1995). We found that the research in this area started with Quillian (1968) and later Collins and Loftus (1975).

The motivation for calculating semantic similarity arises from the fact that although words like bird and airplane are apparently closer to each other than the pair bird and pond, the data collected from a set of people with a certain social background may suggest the opposite. Strong correlation between the opinions collected from two different sets of individuals through a psychological test gives us an idea about why the words bird and pond are perceived as closer to each other than bird and airplane. The psychological test makes generalizations (Goldstone, 1994).

To determine the probability of relatedness (the concept will be explained later on) between two words, we assume that the units of conceptual properties depend on the common conceptual properties of these words. This kind of statistical measurement helps to extract synonyms (Lin, 1998) and retrieve lexical information (Sahami & Heilman, 2006), which is necessary for making an ontological network like WordNet. To compute semantic similarity one can use web-based metrics. Some works try to explore the cognitive process using the model of priming. In these works it is believed that a speaker has an implicit memory which, by certain psychological behavioural exercises, links to the stimuli basically appraised by his or her experience. It is also believed that priming cannot be pre-supposed; it is considered an automatic process (Sánchez-Casas et al., 2006). The relatedness between words is supposed to reflect the concept of words rather than the anticipation of a formal meaning (Thompson-Schill et</s>
|
<s>al., 1998). In a recent work on priming of Bangla words (Dasgupta et al., 2010) a cross-modal priming experiment was conducted to identify the mental representation and access strategies for morphologically derived words in Bangla. It is observed that morphologically transparent words do prime each other despite their phonological associations. However, morphologically opaque but phonologically transparent Bangla words do not show any priming effect.

3. CONCEPTUAL DISTANCE

There are many ways through which the relationship between words can be established, although not all kinds of relationship are to be considered conceptual relations. We found a strong correlation among the respondents relating pairs of words. The following parameters can be considered in this regard.

3.1 Phonetic Similarity

Two or more words can be related if they are phonetically similar. There is no reason for taking formal semantics as a tool to extract relatedness between these types of words. And it is not the case in conceptual relationships. We have seen that phonetic similarity also triggers a human's cognitive ability to read or to hear words as close. For example, in poetry the rhyming words and alliteration relate words within a closed frame. Let us consider the following example:

gOgone gOroje megh, ghOno bOroSa.
Kule Eka bose achi, nahi bhOroSa
"Clouds rumbling in the sky; teeming rain.
I sit on the river bank, sad and alone."

(the literal meaning of the words "nahi bhOroSa" would be "without hope/trust").1

Note: this poem shows that the final words ("bOroSa" and "bhOroSa") of the two sentences are phonetically similar. If we now ask some native speakers to calculate the distance between such phonetically similar words, the anticipated average figure would be very low, though the words are semantically very distant.

3.2 Relationship Between Synonymous Words

Synonymous words are indeed semantically similar. And this type of relatedness is, as usual, very conceptual. Language has now, as an objective of a modern enterprise (like linguistics), a symbolic dichotomy to be claimed as an established relation between signifier and signified. But it would be fallacious to presuppose that a signified really exists and the chain between signifier and signified is viable. In other words, synonyms disprove the presupposed mono-typical existence of a signified. For example, two or more synonyms are essentially used for separate purposes of speech. Let us consider some real life examples from Bangla.

In Bangla there are many words signifying the configuration of the feminine as a standard cover term. But native speakers use many synonymous words, elaborated below. To distinguish the conceptual differences between them, we have collected data from the dictionary of Samsad digitized by the University of Chicago.2 A few examples are given below.

- stri: a wife; a married woman; a woman
- nari: a woman; womankind; a wife
- lOlona: a woman; a gentlewoman, a lady; a wife. (Rarely used in daily speech, specially used in archaic poetry, or for making fun or punning.)
- mohila: a lady, a gentlewoman; a woman.
- bodhu: a wife; a newly married woman; a bride; a married woman; a daughter-in-law

1 This is a piece from "Sonar Tari" of Rabindranath Tagore and is translated by William Radice (freely available on the internet).
2 We have used data from the Samsad Bengali-English dictionary digitised by the University of Chicago. In a personal correspondence,</s>
|
<s>James Nye, the Bibliographer for Southern Asia and Director, South Asia Language and Area Centre, The University of Chicago, informed us that there are approximately 20,950 headwords in this dictionary. See http://dsal.uchicago.edu/dictionaries/biswas-bengali/.

The five words listed earlier, chosen from various terms, are used as synonyms for woman in Bangla. However, such dictionary entries cannot clearly distinguish any useful differences between these terms. Although these five words are semantically related to each other, we can paradigmatically change these words, as will be clear from the following sentences, and thus we can test whether these words are replaceable or not. As we stated earlier, we use different synonyms for certain purposes.

(1) stri jatir unnoti-i deSer unnotir prothom dhap
"Advancement of women is the first step towards the welfare of the state."
(2) nari jatir unnoti-i deSer unnotir prothom dhap
"Advancement of women is the first step towards the welfare of the state."
(3) narirai puruSer calika Sokti
"Women are men's guiding force." (Here "narira" is the plural of "nari" and "narijati" means only women.)
(4) lOlonarai puruSer calika Sokti
"Women are men's guiding force."
(5) je mohilaTike apni dekhchen tini aSole ei ONcOler netri
"The lady you see is, in fact, a leader of this area."
(6) *je bodhuTike apni dekhchen tini aSole ei ONcOler netri
"The bride you see is, in fact, a leader of this area."

Among the examples, (1) and (2), (3) and (4), (5) and (6) are pairs of sentences in which we have just replaced words with their near synonyms. Though these are all synonyms of each other, we cannot put a word in arbitrarily without contextualizing the discursive information. For this reason, (4) and (6) and the following sentences are to be considered culturally un-authentic or inconvenient, although they are not grammatically unacceptable sentences in terms of Bangla. Now consider the following examples.

(7) je nariTike apni dekhchen tini aSole ei ONcOler netri
"The lady you see is a local leader."
(8) baRite to moTe dujon thaki, ami ar amar nari
"Only two people live in this house: myself and my wife."

Here "nari" is used for "lady" and "wife" respectively.

We cannot say that these two sentences are grammatically unacceptable, but we must say that a native speaker of Bangla cannot agree with this expression. We cannot even say "bou bOr bhed na dekhe manuS khuMje dEkho" instead of "stri puruS bhed na dekhe manuS khuMje dEkho", though the words "stri" and "bou" are semantically related. The sentence translated into English is "look for a human being without discriminating between a man and a woman". If we use "bou" (bride) for woman and "bOr" (husband) for man, the sentence loses its universality.

There is also a chain of words, around the abstract concept of "female" in Bangla, which are co-related. For example, as nouns there are many, such as: nari <> stri <> rOmoni <> lOlona <> ONgona <> kamini <> bonita <> bodhu (> bou) <> mohila <> ghoSit <> konna <> magi (a slang word). There are more words indicating females which are used normally as adjectives, for example balika (girl) <> toruni (young woman) <> briddha (old woman) <> prouRa (old woman but not as old as briddha) <></s>
|
<s>kumari (unmarried woman) <> bibahita (married woman). Each of the above examples has a distinct set of semantic properties, but also some commonness. All sets have intersection points through which one may establish their ontological relationship. Another thing to be noted is that there is a nucleus (abstract/physical/biological/objective) meaning in the words listed above, which substantially correlates the terms.

3.3 Conceptual Relation between Words

The main purpose of this paper is to propose a hypothesis that every word is conceptually linked up with other words, though they are not necessarily synonyms. Moreover, whenever we look at the semantic network between the words, we see that there are certain attributes socially assigned to each of the words at any level of the concept that make a word relate to other words, as discussed in the introduction. Let us now show how the words termed as CR are linked to each other conceptually.

The term Semantic Distance is mostly standard-dictionary oriented. In this work, we leave the options to the speakers to calculate the distance between words instantaneously. Therefore, this assessment procedure becomes more intuitive than consciously determined. To summarize our point of view, the use of words or of such units of language is flexible in the sense that it depends on the respective psycho-social factors in generating the conceptual properties of a word. This is the reason for avoiding a term like semantic property.

Let us consider here two pairs of words between which we have to establish CR. These are: bhaSa ("language") – bakko ("sentence") and paHaR ("mountain") – nowka ("boat"). Before we describe the survey methodology, let us first try to anticipate the possible relationship between the pairs of words just mentioned, viz. "bhaSa–bakko" and "paHaR–nowka". A native speaker (in Bangla) is expected to find "bhaSa" and "bakko" to be very close together. In fact, "bakko" is considered a part of "bhaSa", or both may be just parts of "ukti" (utterance). However, the same cannot be said of the words "paHaR" and "nowka". "paHaR" or mountain is a part of nature existing from geological times, and "nowka" is a man-made vehicle for river or sea transport, or for transportation on any water body. However, a person who loves nature and spends his/her holidays at mountain resorts, seaside or riverside may consider these two words not too distantly related, as they are part of what may be called tourism.

We have surveyed 30 native speakers and found that the speakers varied in their opinions and marked arbitrarily, following a scale of measurement for CD. The details are given in Section 4. The average opinion says "language" and "sentence" are more closely related to each other than "mountain" and "boat". Now we draw two trees, the nodal distances of which indicate the CD between the words.

As argued before, from the first tree (Figure 1) for the pair bhaSa "language" – bakko "sentence" it is easy to understand that, as the nodal distance between language and sentence is very short, one concludes that the conceptual distance between these two words is small. On the other hand, sentence is considered a part of language, i.e. sentence is one of the members of the set called "language". The second case, which is also discussed</s>
|
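The nodal-distance idea used for the trees above can be made concrete with a toy implementation. The parent-pointer encoding below assumes the Figure 1 hierarchy as described in the text (Human Communication above Language, Symbols and Music, with Sound, Word and Sentence under Language); the node labels and the code are our illustrative sketch, not the authors' implementation:

```python
# Hypothetical parent-pointer encoding of the Figure 1 conceptual tree.
PARENT = {
    "language": "human communication",
    "symbols": "human communication",
    "music": "human communication",
    "sound": "language",
    "word": "language",
    "sentence": "language",
}

def ancestors(node):
    """Return the chain from a node up to the root, inclusive."""
    chain = [node]
    while node in PARENT:
        node = PARENT[node]
        chain.append(node)
    return chain

def nodal_distance(a, b):
    """Edges on the path joining a and b via their lowest common ancestor."""
    up_a, up_b = ancestors(a), ancestors(b)
    common = next(x for x in up_a if x in up_b)
    return up_a.index(common) + up_b.index(common)

print(nodal_distance("language", "sentence"))  # short distance: close concepts
print(nodal_distance("sentence", "music"))     # longer path: more distant
```

Under this encoding, "language"–"sentence" is one edge apart while "sentence"–"music" must pass through the root, matching the intuition that a shorter nodal distance means a smaller conceptual distance.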
<s>in some detail (Figure 2), is more complex than the first one. What we see in the second tree is that mountain may be anticipated under two different nodes, but for establishing the relationship we have considered it under the node tourism, as we can connect it to the boat more closely.

Fig. 1. Conceptual tree for "Language" and "Sentence". (Human Communication branches into Language, Symbols and Music; Language branches into Sound, Word and Sentence.)

It should be noted that this map is developed on the basis of a re-construction mode. But people generally hold conceptual ideas according to some predictive social pre-consciousness. For example, one surveyee thought that the distance between pakhi "bird" and rikSa "rickshaw" (a manual vehicle) is small. When the surveyee was asked how she designated such closeness, she stated that on that particular day, when she was coming to her office, she saw a bird while she was in a rickshaw. Or it may be that she found a sticker with a picture of a bird pasted on the back of a rickshaw. It happens quite often in Kolkata.

Now consider two more examples to examine this issue with a stretched explanation. Consider the pairs "boat–ship" and "president–king". Conceptually, "boat" is nearer to "ship" than "president" is to "king". On the other hand, "president" is a word nearer to "king" than to "ship". But "king" and "ship" are also related in terms of common conceptual properties like "big" and "large" (i.e. two sets of conceptual properties of these two words intersect in terms of the common members of the sets). For example, we can establish the CR between "king" and "ship" on the basis of, for instance, P1 and Q1; therefore, P1 ≈ Q1. This relation is an ontological relationship.

Ontology classifies exhaustively the entities of a being or a domain or a conceptualization. Depending on some informative features we can map the relationship between the entities, and among those vast features we

Fig. 2. Conceptual tree in large scale.

invariably find the commons between any two entities. This point leads to the idea of an abstract process of explanation for the relations and classifications of the entities or the events in a domain. Thus we can consider this explanation as the methodological motivation to describe a domain.

4. CURRENT METHOD

Here we discuss some methodological parts that track the trajectory of our work. We basically depend on the ontological relationship (as discussed in the previous section) between two words (as compositions of certain concepts). Ontological mapping is the best way to find out the CR between the words in a language or across different languages. CR is such a function, as described in the introduction, where a set of conceptual properties within the term of a word is assigned to another set of conceptual properties of a different word based on their psycho-sociological expressions.

Let us consider X as a family or a domain, where X1, X2, X3 and Xx are the members (Figure 3). There are also other members, like X4 and X5, whom we do not consider for conceptualization. But we know that they exist. Moreover, it is possible that there are other members of whose existence we are</s>
|
<s>not aware of. If we are going to present an ontology of the family, we have to describe the individual members, their positions, the relationship between the members and the possible members through a diagram. The arrows show the directions by which a human switches her thought from one object to another.

Fig. 3. Ontological pattern of conceptual relation.

The best examples of such categorization schemes are periodic tables or library catalogues. Browsing a library catalogue, one gets to know of high-order categorization. The cataloguing systems develop all kinds of mapping between the events and the conceptualization they specify. A word itself is a result of conceptualization that evolved socially. We have kept this in mind while arranging the data sheet.

5. EXPERIMENT

Presently, it is clear that if we have to realistically calculate the distance between the words we have chosen, we cannot go with any pre-supposed quantitative status for the words in a language. We have discussed earlier that such a pre-supposition is highly dependent on the dictionary meaning of a word. Although the individuals set their apparently random responses manually when measuring the distance between two different words, we find that the correlation between those different individuals is very strong. Moreover, a high correlation is observed when the respondents duplicated the same task at a different point of time. It implies that the association between two words is not really random and may not be a function of time. In the following we discuss this in some detail.

Survey Procedure: Twenty pairs of Bangla words were given to 30 individuals with similar academic and linguistic backgrounds. The 30 respondents were divided into two groups, each comprising 15 members, to find out the correlation on bias between the groups. It was assumed that the distance between two words can be mapped on to a set S ⊆ R+, where R+ is the set of real positive numbers including zero. If x ∈ S, then we fixed the domain of x as 0 ≤ x ≤ 4. For example, if two words are just synonyms then ideally the distance between them will be zero, whereas two words apparently not semantically related to each other in any way will be given a score of 4 or thereabouts. The respondents were asked to assign the value of the distance as any rational number (up to one decimal point). To reduce the time dependence of CR, the respondents were asked to duplicate the task at different points of time. The average scores by the respondents are given in Appendix I.

Figure 4 depicts fragments of possible tree diagrams where we associate the words "crow" and "cuckoo" and "rickshaw" and "bird". The diagram supports the survey data, which indicated that "crow" and "cuckoo" are much closer than "rickshaw" and "bird". In the diagram, against each nodal joint (from where words branch out), the corresponding IC is given. Now it must be mentioned here that there is no WordNet available in Bengali. For the corpus of words we relied on the University of Chicago document.

The diagram implies that mentally one can distinguish between two close rational numbers; for example, 2.1</s>
|
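The survey procedure just described (two groups of 15 respondents scoring word pairs on a 0–4 scale) lends itself to a simple between-group consistency check. The sketch below uses invented placeholder scores, not the survey data; only the computation mirrors the described procedure:

```python
from statistics import mean

# Hypothetical per-pair mean scores (0 = synonyms, 4 = unrelated) for two
# groups of respondents; these numbers are illustrative only.
group_a = {"kak-kokil": 0.8, "bhaSa-bakko": 0.6, "paHaR-nowka": 3.1, "rikSa-pakhi": 3.2}
group_b = {"kak-kokil": 0.9, "bhaSa-bakko": 0.5, "paHaR-nowka": 3.4, "rikSa-pakhi": 3.3}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

pairs = sorted(group_a)
r = pearson([group_a[p] for p in pairs], [group_b[p] for p in pairs])
print(f"between-group correlation r = {r:.2f}")
```

A value of r near 1 for the real data is what the paper reports (r = 0.95 between the two groups), supporting the claim that the scoring is not random.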
<s>and 2.2 while putting a number against a pair. To illustrate the scheme further, let us consider the pair of words kak and bayOS (cf. Figure 4; the English meaning of both words is "crow"). These words are synonymous and it is expected that the respondent will measure the distance as zero or very near to zero. Now consider the words "kak" and "kokil" ("cuckoo"); though these birds don't belong to the same species, both are common in Bengal. However, "kokil" is generally seen or heard only in the spring time. Also, almost all residents of Bengal know that the female cuckoo lays its eggs in a crow's nest, and as the eggs are similar to those of a crow, the cuckoo's offspring are brought up by a crow family. This is another relationship between these two birds. Often people's voices are compared either to that of a crow or to that of a cuckoo. They lie on the extremes of the spectrum of voices, because while the crow's voice is harsh and dissonant, the cuckoo's voice is melodious. So it is expected that the respondents brought up in a typical Bengal culture would place these two words very close to each other.

Let us now consider the following pair of words (also cited in Figure 4) having apparently no immediate relationship: Tren "train" and rumal "handkerchief". It may be that in any Western country people would measure the distance between them at almost 4. But it may happen that people in India who frequently travel by train find, more often than not, hawkers selling handkerchiefs in trains. So they may associate these two words together and may not give the maximum score.

Fig. 4. Scaling the Conceptual Distance between Words.

Though the three pairs of words mentioned earlier were not included in the sample given to the respondents, there was one pair of words for which we found interesting responses. These words were discussed earlier. The words are rikSa "rickshaw" and pakhi "bird" (see Figure 5). Some respondents thought that the distance between them was small. The reason behind this mental process has been speculated upon earlier. However, there is a statistic which can measure whether the random deviation from the mean is statistically significant. This is the standard deviation, denoted by σ, which is defined by σ² = (1/n) Σ (xi − μ)², where xi (i = 1, 2, …, n) are the scores by the individuals and μ is the mean given by μ = (1/n) Σ xi, with n the total number of participants (which, in the present case, is 30).

For the pair of words "rikSa" (rickshaw) and "pakhi" (bird) the mean distance is 3.25 and σ = 0.22. The measure of deviation is given by CV = standard deviation/mean = 0.07, which is really small. It signifies that there is strong consistency among individuals in measuring the distance between a pair of words, and this encourages us to speculate that the cognitive process is rather uniform for this group of participants, though there may be variation from individual</s>
|
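The dispersion statistics above (population standard deviation and CV = σ/mean) can be computed directly from the definitions in the text. The score list below is an invented illustration whose mean happens to be 3.25, matching the reported mean for rikSa–pakhi; its σ and CV are not the reported values:

```python
from math import sqrt

def population_sd(scores):
    """sigma^2 = (1/n) * sum((x_i - mu)^2), as defined in the text."""
    n = len(scores)
    mu = sum(scores) / n
    return sqrt(sum((x - mu) ** 2 for x in scores) / n)

def coefficient_of_variation(scores):
    """CV = standard deviation / mean."""
    return population_sd(scores) / (sum(scores) / len(scores))

# Illustrative scores only; the paper reports mean 3.25 and sigma 0.22
# over 30 respondents, giving CV = 0.22 / 3.25 ≈ 0.07.
scores = [3.0, 3.2, 3.4, 3.1, 3.5, 3.3]
print(f"mean = {sum(scores) / len(scores):.2f}")
print(f"CV   = {coefficient_of_variation(scores):.3f}")
```

A small CV, as in the paper's 0.07, means the respondents' scores cluster tightly around the mean.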
<s>to individual. We also found a strong correlation (r = 0.95) between the two groups as far as mean distance is concerned. Moreover, an equally strong correlation (upper limit being 0.95) is observed within a group when the participants were asked to duplicate the same task at different points of time. This means the scores by the participants can be considered almost time independent.

Fig. 5. Conceptual tree: "rickshaw" and "bird". (Node labels: Goti-probaHo-poribOHon "movable object or concept"; movable natural being; movable mechanical element; moving in the sky; vehicle; human-powered; pakhi "bird"; rikSa "rickshaw"; kokil "cuckoo"; kak "crow".)

5.1 Semantic Similarity Based on Corpus Linguistics

To correlate the survey data with the theoretical distance (semantic similarity) we follow the statistical analysis of previous authors who worked on this topic (Jiang & Conrath, 1997, and references therein). Here we introduce the idea of information content (IC). The IC of a concept c can be quantified as IC(c) = −log p(c), where p(c) is the probability of finding an instance of concept c. It is actually what is called entropy in the information and natural sciences. Following Richardson and Smeaton (1995), we shall define p(c) as p(c) = freq(c)/N, where freq(c) is the number of total entries under the concept c from which the pair of words under consideration is derived (here we consider the nearest node). Concrete numerical examples are given below.

Figure 5 depicts fragments of possible tree diagrams where we associate the words "crow" and "cuckoo" and "rickshaw" and "bird". The diagram supports the survey data, which indicated that "crow" and "cuckoo" are much closer than "rickshaw" and "bird". Now it must be mentioned here that there is no WordNet available in Bengali. For the corpus of words we relied on the Chicago University document and Samsad Samarthasabda Kosh.

The total number of words in Samsad Samarthasabda Kosh is 62,500. Now, for example, we can measure frequencies of some pairs of words:

(1) bakko "sentence" and bhaSa "language".
We have shown a relationship between bakko "sentence" and bhaSa "language". These two words, as we have calculated in a tree diagram (Figure 1) depending upon the information of Samsad Samarthasabda Kosh, are derived from a node called "ukti" ("utterance"). The total number of entries under "ukti" from which "bakko" and "bhaSa" are derived is 45. Therefore, the corresponding IC = −log(45/62,500) = 7.24. The high value of IC indicates that these words are highly correlated, which agrees with the survey results.

(2) paHaR "mountain" and nouka "boat".
"paHaR" and "nouka" can be anticipated as derivations from a common node, bhromon (tourism). The total number of entry words under "bhromon" is 162. Then the related IC is IC = −log(162/62,500) = 5.95. This pair of words also gives a high value. However, the participants found that these words are not close to each other.

(3) pakhi "bird" – rikSa "rickshaw" and kak "crow" – kokil "cuckoo".
In Samsad Samarthasabda Kosh, rikSa is accumulated under the heading Goti-probaho-poribahan "speed-wave-transportation". "Goti-probaho-poribahan" consists of 2160 entries. But there is no entry for pakhi. Samsad Samarthasabda Kosh does not give any clue for drawing a tree diagram for pakhi and rikSa, and there is no WordNet in Bangla. If we can assume a node such as "Goti-probaho-poribahan", under which</s>
|
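The IC values quoted in this section can be reproduced directly from the counts given in the text (N = 62,500 total entries in Samsad Samarthasabda Kosh; the reported values imply that −log is the natural logarithm). A minimal sketch:

```python
from math import log

N = 62_500  # total entries in Samsad Samarthasabda Kosh, as quoted in the text

def information_content(freq, total=N):
    """IC(c) = -log p(c) with p(c) = freq(c) / N (natural logarithm)."""
    return -log(freq / total)

# Node frequencies quoted in the text for each word pair's nearest node.
nodes = {
    "ukti (bakko - bhaSa)": 45,
    "bhromon (paHaR - nouka)": 162,
    "Goti-probaho-poribahan (pakhi - rikSa)": 2160,
    "pakhi (kak - kokil)": 340,
}
for name, freq in nodes.items():
    print(f"{name}: IC = {information_content(freq):.2f}")
```

Up to rounding, these reproduce the paper's values 7.24, 5.95, 3.36 and 5.21: the fewer entries a shared node has, the rarer the concept and the higher the IC.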
<s>pakhi and rikSa could possibly be undertaken, we can tentatively calculate some results. Then for "Goti-probaHo-poribOHon" (movable object or concept), the IC is IC = −log(2160/62,500) = 3.36 (a medium index). The participants did not find these words close to each other. In the same way, we have calculated kak and kokil, which very obviously branch out from a common node, pakhi "bird". The entry count under "pakhi" in Samsad Samarthasabda Kosh is 340. Here the IC is IC = −log(340/62,500) = 5.21. This indicates that the words are close, but this pair of words was not in our list.

6. CONCLUSION

In this paper an attempt has been made to shed some light on the cognitive process by which a subject relates a pair of words in a particular socio-linguistic context. To set a quantitative measure of the distance between two words, a survey was conducted. Thirty people with similar socio-cultural backgrounds were given 20 pairs of Bangla words each and were asked to find the distance between them within a particular range. Some of the pairs of words seem to be immediately related to each other, but there were others which were not apparently related as far as the semantic aspect is concerned. Take, for example, the words puruS "male" and SiNHo "lion". If one makes a tree, as shown earlier, one has to go to the word "mammal" perhaps, which includes a large number of creatures in the animal kingdom. However, some of the respondents found these words closer than what is suggested by the tree diagram (Figure 6). This may be owing to the fact that the compound word puruS-SiNHo "male-lion" is of quite common usage in the Bengali language. Compound words like puruS-SiNHo "male-lion" and puruS-bEghro "male-tiger" belong to the class of compound words (samasa in Sanskrit) which is called upameya karmadharaya; "puruS-SiNHo" means a man who is as brave as a lion.

Hence it was not surprising that the words puruS and SiNHo were thought to be close to each other, and the mean distance from the respondents' scores turned out to be 1.33 out of 4. The words baHon "vehicle" and iNdur "rat" (these words are not part of the survey samples) would seem further away than the words puruS (male) and SiNHo (lion). However, any Indian brought up in Hindu culture or acquainted with Hindu mythology would immediately establish a relation between the two words, as in Indian mythology iNdur (rat) is the vehicle of the god Lord Ganesha, who is widely known outside India as the elephant god. In the Appendix the average scores assigned against the pairs of sample words by 30 individuals are given.

The −log(p) calculation for puruS "man" – SiNHo "lion" is given below. These two words are derived from a common higher node, i.e. Pran-prani-sorir "life-animal-body", the entry count under which is 1115: −log(1115/62,500) = 4.03. Here the index suggests that the words are related to a certain extent. But the survey suggests that these words are pretty close.

One can find many instances where semantically distant words may seem closer to a particular set of individuals brought up in a specific socio-cultural background. The study made here may be considered a pilot study</s>
|
<s>on a subject on which, to the best of our knowledge, no work has been done so far in Bangla and very little in other Indian languages. We hope our study will stimulate similar studies in Bangla in particular and in Indian languages in general, and will be helpful towards building up a proper Bangla WordNet.

Fig. 6. Conceptual tree for "male" and "lion".

ACKNOWLEDGEMENT

The survey used in this study was conducted with the help of Baidehi Sengupta, and enormous support on some other issues came from Rimi Ghosh Dastidar. We are thankful to them. We are also grateful to all the people who participated in the survey. We are grateful to the referee for constructive suggestions.

REFERENCES

Bollegala, D., Matsuo, Y., & Ishizuka, M. (2009). A relational model of semantic similarity between words using automatically extracted lexical pattern clusters from the web. Proceedings of Empirical Methods in Natural Language Processing (EMNLP 2009), pp. 803–812. http://www.iba.t.u-tokyo.ac.jp/~danushka/papers/danushka-EMNLP2009.pdf
Collins, A., & Loftus, E. (1975). A spreading activation theory of semantic processing. Psychological Review, 82, 407–428. http://homepage.psy.utexas.edu/homepage/faculty/Markman/PSY394/CollinsLoftus.pdf
Dasgupta, T., Choudhury, M., Bali, K., & Basu, A. (2010). Mental representation and access of polymorphemic words in Bangla: Evidence from cross-modal priming experiments. International Conference on Natural Language Processing (ICON), 58–67.
De Saussure, F. (1916/1974). Course in General Linguistics. Tr. by Wade Baskin. London: Fontana/Collins.
De Saussure, F. (1916/1983). Course in General Linguistics. Tr. by Roy Harris. London: Duckworth.
Goldstone, R. L. (1994). Similarity, interactive activation, and mapping. Journal of Experimental Psychology: Memory and Cognition, 20, 3–28. http://cognitrn.psych.indiana.edu/rgoldsto/pdfs/siam.pdf
Jiang, J. J., & Conrath, D. W. (1997). Semantic similarity based on corpus statistics and lexical taxonomy. International Conference Research on Computational Linguistics (ROCLING X), September 1997.
Lin, D. (1998). Automatic retrieval and clustering of similar words. COLING-ACL98, Montreal, Canada, August 1998. http://webdocs.cs.ualberta.ca/~lindek/papers/acl98.pdf
Quillian, M. R. (1968). Semantic memory. In M. Minsky (Ed.), Semantic Information Processing (pp. 216–270). Cambridge, MA: MIT Press.
Resnik, P. (1995). Using information content to evaluate semantic similarity in a taxonomy. Proceedings of the 14th International Joint Conference on Artificial Intelligence, 1, pp. 448–453, Montreal, August 1995.
Richardson, R., & Smeaton, A. F. (1995). Using WordNet in a knowledge-based approach to information retrieval. Working Paper CA-0395, School of Computer Applications, Dublin City University, Ireland.
Sahami, M., & Heilman, T. D. (2006). A web-based kernel function for measuring the similarity of short text snippets. Proceedings of the 15th International World Wide Web Conference (WWW). http://robotics.stanford.edu/users/sahami/papers-dir/www2006.pdf
Sánchez-Casas, R., Ferré, P., García-Albea, J. E., & Guasch, M. (2006). The nature of semantic priming: Effects of the degree of semantic similarity between primes and targets in Spanish. The European Journal of Cognitive Psychology, 18, 161–184. http://psico.fcep.urv.es/projectes/gip/papers/sc_f_ga_g_2006.pdf
Thompson-Schill, S. L., Swick, D., Farah, M. J., D'Esposito, M., Kan, I. P., & Knight, R. T. (1998). Verb generation in patients with focal frontal lesions: A neuropsychological test of neuroimaging findings. Proceedings of the National Academy of Sciences, 95, 15855–15860.

Copyright of Journal of Quantitative Linguistics is the property of Routledge and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission.</s>
|
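The conclusion above appeals to conceptual distance measured over a taxonomy such as the conceptual tree of Fig. 6. As a minimal, purely illustrative sketch of that family of taxonomy-based measures, the snippet below counts edges through the lowest common ancestor of two concepts; the toy tree (loosely echoing the "male"/"lion" example) is hypothetical and plain edge counting stands in for the paper's relational model.

```python
# Toy concept taxonomy (hypothetical edges, loosely echoing the
# "male"/"lion" example of Fig. 6): child -> parent links.
PARENT = {
    "lion": "feline", "feline": "animal",
    "male": "gender", "gender": "attribute",
    "animal": "entity", "attribute": "entity",
}

def ancestors(node):
    """Return the path from a node up to the root, node first."""
    path = [node]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def edge_distance(a, b):
    """Conceptual distance as the number of edges on the shortest
    path through the lowest common ancestor."""
    pa, pb = ancestors(a), ancestors(b)
    for i, anc in enumerate(pa):
        if anc in pb:
            return i + pb.index(anc)
    return float("inf")  # no common ancestor

print(edge_distance("lion", "male"))  # -> 6, via the shared root "entity"
```

More refined members of this family weight edges by information content rather than counting them uniformly, but the tree traversal is the same.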
<s>However, users may print, download, or email articles for individual use.</s>
|
<s>Syntactico-semantic subject (karta) in Bangla

Pragati Dhang (α), Sanjay Chatterji (α), Tanaya Mukherjee Sarkar (α), Sudeshna Sarkar (α), Jayashree Chakraborty (β), Anupam Basu (α)
(α) Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur
(β) Department of Humanities and Social Sciences, Indian Institute of Technology, Kharagpur

Abstract

Identification of the subject (karta) in the Indian languages has been a difficult task, mainly because of the varied use of bibhakti markers: there is no one-to-one correlation between the semantics of a marker and the corresponding context in which it is used. The present paper identifies karta in Bangla sentences based on modern Bangla grammar. When the task is assigned to a computer through the dependency structure of sentences, it prefers syntactic dependencies. Other than karan and apadan, all the karakas take the same case markers, suffixes, or bibhakti markers. The concentration is on the identification of karta from logical and syntactic (syntactico-semantic) points of view.

Keywords: Syntactico-semantic relation, modern Bangla grammar and language

1. Introduction

In Bangla, the subject (karta) is one of five karaks. The subject (karta) of a Bangla sentence is a noun phrase which indicates a person or thing that performs an action, experiences something, undergoes an action, or exists somewhere. Karta takes different bibhakti markers like 'te' ('েত'), 'ke' ('েক'), 'e' ('এ'), 'aYa' ('য়') and 'ra' ('র'), which are also shared by other karaks. In Bangla, the karta may take a wide range of bibhakti markers or suffixes (nominative, genitive, accusative and locative), and thus there is no simple rule to identify the subject (karta) based on the bibhakti markers only. In Bangla, it is difficult to analyse karta based on purely syntactic features.
In several contexts, it is found that karta takes some other bibhakti markers which indicate other types of classification of karak relations. For the purpose of identifying the subject, the Paninian grammatical model [2] and some other modern grammar books [1,4] have been studied. We have used syntactico-semantic features for identifying karta. Syntax indicates the grammatical features of a sentence (the order or sequence of words and the agreement of words in the sentence), whereas semantics indicates its meaning, i.e. the meaningful classification of words [1]. The term syntactico-semantic indicates both syntax and semantics. Since syntactic markers alone do not give a true picture of the karta of a Bangla sentence, semantics also needs to be used to identify the karta and its role. There are instances of kartas in the same syntactic or semantic role with different bibhakti markers, as well as kartas of different syntactic or semantic roles with the same bibhakti marker. A few examples of each of these kartas are in order. For all examples we use Bangla font, ITRANS, gloss and English translation, respectively, throughout the paper, and the bold words indicate karta. The abbreviations used in the glosses are: genitive - gen, locative - loc, accusative - acc, present tense - pres, past tense - past, perfective - perf, progressive - prog, participle-
|
<s>ppt, future - fut, negative - neg.

Some examples of the same syntactic role with different bibhakti markers:

Karta with nominative marker.
1. রাম ভাত েখল| rAma bhAta khela. Ram rice eat-past. Ram ate rice.

Karta with locative marker.
2. মানুেষ কথা বেল| mAnuShe kathA bale. Men-loc word speak. Human beings speak.
3. রাজায় রাজায় যুদ্ধ কের| rAjAY rAjAY yuddha kare. King-loc king-loc fighting do. The king fights with a king.

In the above examples, all the kartas have the same syntactic role but take different bibhakti markers.

Some examples of the same semantic role with different bibhakti markers:

Karta with nominative marker.
4. আিম গান ভালবািস| Ami gAna bhAlabAsi. I song love. I love songs.

Karta with genitive marker.
5. আমার িখেদ েপেয়েছ| AmAra khide peYechhe. I-gen hunger get. I am feeling hungry.

In the above examples, all the kartas have the same semantic role but take different bibhakti markers.

Some examples of different syntactic roles with the same bibhakti marker:

6. আমার শীত করেছ| AmAra shIta karachhe. I-gen cold do-pres-prog. I am feeling cold.
7. এই ভােব তারার সৃষ্টি হয়| ei bhAbe tArAra sriShTi haYa. This way star-gen formation be-pres. In this way the stars are created.
8. আমার খাওয়া হয়িন| AmAra khAoYA haYani. I-gen eating be-pres-neg. I have not eaten.

In the above examples, the words in bold font have the same syntactic features but their roles are different. To identify their roles, semantics must be used. Here, the words AmAra, tArAra and AmAra indicate feeling, undergoing an action, and performing an action, respectively. So both syntax and semantics are used to identify the karta as well as the role of the karta, which indicates the different types of karta. In a sentence, voice (bachya) plays an important role in identifying karta.
In the active voice (kartribachya), the karta of a sentence is identified on the basis of the doer of the action, on the basis of the semantics of the verb, and on the basis of the complement of the subject, where both the subject and its complement refer to the same entity. In the passive voice (karmabachya), the karta of a sentence is identified as the subject which is the active agent of the sentence and is generally followed by postpositions like 'dbArA', 'diYe', 'kartRRika' etc. This subject may or may not take the genitive ('ra') marker. In the neuter or impersonal voice (bhabbachya), it is difficult to identify the exact karta of a clause, as the agent or doer subject is silent and does not come to the surface; here the action is emphasized and plays an important role. According to many grammarians, the neuter or impersonal voice (bhabbachya) is identified by the absence of an object, and the verb is usually intransitive. This karta takes accusative and genitive markers [1]. The paper is organized as follows: section 2 presents the objective, section 3 discusses related work, section 4 defines karta and presents the classification of karta and its analysis, section 5 presents a discussion, and finally section 6 presents the conclusion.

2. Objective

Two processes are used to annotate the
|
<s>different types of karta in Bangla sentences. The first process is to identify the karta and the second is to classify the karta based on its role. For annotating kartas of different types, the karta of a sentence is identified and classified on the basis of syntactic and semantic features. The semantic features of the verb that are used are the nature of the verb (volitional or non-volitional), features of obligation or necessity attached to the verb, etc. These semantic features determine whether the karta will be classified as agent, experiencer, noun of proposition, and so on. The features are indicated on the karta by syntactic markers like suffixes or bibhakti markers and postpositions, and also by morphological features like TAM. For the identification and classification of karta, syntactic information as well as semantic information is used. In this case, syntactic information indicates undergoing the action done by an implied agent, and semantic information indicates the type of effort and the obligations in doing the action. Semantic information includes the animacy of the noun phrase and the nature of the verb with which it is attached.

3. Related Work

Several works have been done on defining dependency grammars for different languages. Most of them are used in annotating data. The Penn Treebank for English [8] and the Prague Dependency Treebank for Czech [9] are some such important efforts. Most of these prior works are inclined towards the syntactic properties of the constituents. Recently some work has been done on building dependency annotations for Indian languages. Sharma et al. [5] have created a Hindi dependency treebank named "AnnCorra" using a Hindi dependency grammar. They have used Panini's grammatical model [2], which provides syntactico-semantic analysis to some extent. There are many similarities between the karta we have tagged in Bangla and the karta Sharma et al. [5] tagged in Hindi.
We have classified karta into five categories, namely kriya sampadak karta (doer subject), anubhab karta (experiencer subject), paroksha karta (passive subject), bidheya karta or samanadhikaran (noun of proposition) and sadharan karta (general subject), whereas they have classified it into six categories, namely karta (doer), prayojaka karta (causer subject), prayojya karta (causee subject), madhyastha karta (mediator causer), karta samanadhikarana (noun complement of subject) and clausal subject. In the Bangla treebank, the causer subject (prayojaka karta), mediator causer (madhyastha karta) and causee subject (prayojya karta) are not used. A detailed study is available in "A dependency annotation scheme for Bangla Treebank" [10].

4. Karta (Subject)

Karta can be an active agent of the activity implied by the verb, indicating physical or mental exercise; such a karta is an animate noun or personal pronoun. Karta can be an experiencer or perceiver who does not take any active effort, indicating some mental state, emotion, desire or event. Karta can be a person or thing which undergoes some action, has an obligation on the action, exists somewhere, or indicates the complement of another karta. Karta may also 'be' or 'become' something. It may take any suffix. We classify karta into some subclasses.

4.1. Classification of Karta

Considering the syntactic and
|
<s>semantic information, the following roles of karta are important. Based on these roles, karta is classified into five finer divisions:

a. Doer/agent subject / kriyasampadak karta, tagged as k1d
b. Passive subject / paroksha karta, tagged as k1p
c. Experiencer subject / anubhab karta, tagged as k1e
d. General subject / sadharan karta, tagged as k1m
e. Noun of proposition subject / bidheya / samanadhikaran karta, tagged as k1s

(a) Doer/agent subject / kriyasampadak karta (k1d)

The doer/agent subject (kriyasampadak karta) is a person or an animate object actively engaged or participating in the action. This karta may also cause the work to be done by someone or by something. Here, the verb actively indicates some physical or mental exercise of the doer. The doer karta usually takes the nominative or zero marker, but in some cases it may also take the 'e', 'Ya', 'te' or 'ete' bibhakti markers. It is tagged as k1d.

9. েছেলিট ফুটবল েখলেছ| ChheleTi phuTabala khelachhe. Boy-class football play-pres-prog. The boy is playing football.
10. িমনা গান গাইিছল| minA gAna gAichhila. Mina song sing-past-prog. Mina was singing a song.
11. পায়রােত গম েখেয়েছ| pAYrAte gama kheYechhe. Pigeon-loc wheat eat-pres-perf. The pigeon has eaten wheat.
12. মােয় িঝেয় ঝগড়া কের| mAYe jhiYe jhaga.DA kare. Mother-loc daughter-loc quarrel do. Mother quarrels with her daughter.
13. বািড়র খবর জানান আমার িপিস| bA.Dira khabara jAnAna AmAra pisi. House-gen information gives I-gen aunty. My aunty gives the information of our house.
14. সীতা আয়া দ্বারা বাচ্চাকে খাওয়াচ্ছে| sItA AYA dbArA bAchchAke khAoYAchchhe. Sita nurse by child-acc feed-pres-prog. Sita is feeding the child through the nurse.

(b) Passive subject / paroksha karta (k1p)

In a passive construction, the doer of the action is called the passive subject (paroksha karta). In other words, when the verb indicates an action in a passive construction, its subject is defined as the passive subject. It is generally followed by the postpositions 'diYe', 'dbArA', 'kartRRika', 'hate', 'haite' etc.
Sometimes these postpositions are not present in the surface structure but are implied. It is tagged as k1p. It is the logical subject of a sentence in passive construction and must be animate. It may also take the nominative and genitive markers. The verb is used here as a conjunct verb: the past participle form of the verb with the affix 'A' together with an auxiliary verb such as 'yA' ('go') or 'haYa' ('is').

15. সীমার দ্বারা এই রান্না হল| sImAra dbArA ei rAnnA hala. Sima-gen by this cooking be-past. This dish was cooked by Sima.
16. েদবদাস েলখক শরৎচন্দ্রের দ্বারা রিচত হয়| debadAsa lekhaka sharatachandrera dbArA rachita haYa. Debdas writer Saratchandra-gen by written be-pres. Debdas is written by the writer Saratchandra.

If the agent of the active voice is present, it is followed by the postposition 'দ্বারা', 'িদয়া', 'িদেয়', 'কর্তৃক', 'হেত', 'হইেত' or 'েক' ('dbArA', 'diYA', 'diYe', 'kartRRika', 'hate', 'haite' and 'ke', respectively) in the passive sentence. In some cases passive constructions take place with the auxiliary verbs 'পড়া' ('pa.DA', 'fall'), 'চলা' ('chalA', 'go'), 'হয়' ('haYa', 'happen') etc. instead of 'yA'.

17. রামবাবুর দ্বারা সাপটা মারা েগেছ| rAmabAbura dbArA sApaTA mArA gechhe. Rambabu-gen by snake-class kill go-pres-perf. The snake was killed
|
<s>by Rambabu.

18. জনতার দ্বারা েচারিট মারা পেড়| janatAra dbArA choraTi mArA pa.De. Mob-gen by thief-class kill fall. The thief gets killed by the mob.
19. ইংেরজ বািহনীর দ্বারা মারাঠীরা পরািজত হল| i.nreja bAhinIra dbArA mArAThIrA parAjita hala. English army-gen by Marathis defeated be-past. The Marathis were defeated by the English army.

(c) Experiencer subject / anubhab karta (k1e)

The subject of mental verbs, or of verbs which express a mental state, emotion, attitude, experience, feeling, perception or event, is identified as anubhab karta and marked as k1e. If the subject guesses something, considers something, or regards something or someone, then the subject is also identified as anubhab karta. K1e takes the genitive ('ra') marker and the nominative marker. Generally, the marker depends on the verb with which it is associated. The subjects of some conjunct verbs having 'haoYA', 'pAoYA', etc. are examples of anubhab karta: verbs like 'mane haoYA', 'bodha haoYA', 'khide pAoYA', 'ghuma pAoYA', etc. are used as the verb of anubhab karta. 'pAoYA' is used here only as a non-volitional or stative verb which indicates feeling, not in the sense of receiving. When the verb 'karA' ('do') combined with a noun is used as a conjunct verb, it denotes mental work done by the subject; so verbs like 'chintA karA', 'mane karA' and 'rAga karA' are not mental verbs, because with these verbs the karta is doing the action volitionally or working mentally. Some verbs express something which does not indicate mental work of the subject, for example 'bisbAsa karA', 'anubhaba karA', 'AshA karA', 'sanmAna karA', etc. Here, the nature of the verb indicates this class of subject.

20. আমার মেন হয় েস এই কাজটা পারেব| AmAra mane haYa se ei kAjaTA pArabe. I-gen mind-loc be-pres he this work-class can-fut. I think he can do this work.
21. আমার ঘুম েপেয়েছ| AmAra ghuma peYechhe. I-gen sleep get-pres-perf. I am feeling sleepy.
22.
আমার রামেক বুদ্ধিমান মেন হয়| AmAra rAmake buddhimAna mane haYa. I-gen Ram-acc intelligent mind-loc be-pres. I consider Ram intelligent.
23. সীমা অমলেক ভালবােস| simA amalake bhAlabAse. Sima Amal-acc love. Sima loves Amal.
24. রাম খুব আনন্দ েপল| rAma khuba Ananda pela. Ram very happy got-past. Ram was very happy.
25. িতিন মেন খুব কষ্ট েপেয়েছন| tini mane khuba kaShTa peYechhena. He mind-loc very sorrow got-pres-perf. He was afflicted with sorrow.

(d) General subject / sadharan karta (k1m)

The karta which undergoes some action, exists somewhere or has an obligation on the action is referred to as the general subject (sadharan karta). It is tagged as k1m. In the following sentences a copula or 'be' verb is used. Sometimes this verb is dropped or does not surface in the sentence, and in that case a <NULL> verb is inserted, as in the examples below. This verb indicates a state of being and is realized by words like 'Achhe', 'haYa', 'haYechhe', 'hai', 'chhila' etc. The subject can be a thing, an event, an incident, etc. which exists.

26. আমােদর স্কুল খুব ভাল| <NULL> AmAdera skula khuba bhAla <NULL>. Our school very good. Our school
|
<s>is very good.
27. েসখােন অেনক েদাকান আেছ| sekhAne aneka dokAna Achhe. There many shops be-pres. There are many shops.
28. এই বািড়িট খুব পুরােনা| <NULL> Ei bA.DiTi khuba purAno <NULL>. This house-class very old. This house is very old.
29. েখয়াপারাপার িনেষধ িছল| kheYApArApAra niShedha chhila. Boating prohibition be-past. Boating was prohibited.

(e) Noun of proposition / samanadhikaran karta (k1s)

The complements of the karta are called samanadhikaran. The subject and its complement are the same entity, each indicating the complement of the other. The complement may be on the basis of meaning, person, objects or things, events, place, incident, etc.

30. নেরন ভাল েছেল| Narena bhAla chhele <haYa>. Naren good boy <haYa>. Naren is a good boy.
31. েদবদা ভাল গায়ক িছেলন| debadA bhAla gAYaka chhilena. Debda good singer be-past. Debda was a good singer.
33. এটা ঐ িদেনরই ঘটনা িছল| eTA oi dinerai ghaTanA chhila. This-class that day-gen incident be-past. It was an incident of that day only.
34. িশক্ষকের নাম পবন রায়| shikShakera nAma Pabana rAY. Teacher-gen name Paban Roy. The name of the teacher is Paban Roy.

5. Discussion

While analyzing karta according to the categories mentioned above, we realized that there are a number of cases which do not fit neatly into any of these categories; they give rise to confusion or ambiguity. In the following section, we discuss these problem cases and how they have been resolved. A noun phrase ending with the 'ra' marker and followed by a conjunct verb poses a problem for identifying the subject.

35. প্রচণ্ড শক্তির উদ্ভব হয়| prachanDa shaktira udbhaba haYa. Heavy energy-gen formation be-pres. Heavy energy is formed.
36. এই ভােব তারার সৃষ্টি হয়| ei bhAbe tArAra sriShTi haYa. This way star-gen creation be-pres. In this way the stars are created.
37. হঠাৎ আমার চৈতন্যোদয় হল| haTAt AmAra chaitanyodaYa hala. Suddenly I-gen realization be-past. Suddenly I came to a realization.

38.
এ ধরেনর উচ্চারণের উদ্ভব ঘেটেছ পােঠর জনয্| e dharanera uchchAraNera udbhaba ghaTechhe pAThera janya. This type-gen pronunciation-gen formation happen-pres-perf lesson-gen for. This type of pronunciation is formed for the lesson.

In these examples, we identify the noun with the 'ra' marker as karta. The reason is that in all these examples the verb relates to some happening or the occurrence of some state, and the person or entity undergoes this happening. When a noun or pronoun with the 'ra' marker is followed by another noun and there is a sense of possession between the two nouns, the relation between them is that of 'sambandha'. So the noun or pronoun with the 'ra' marker is not considered the subject; instead, the head noun of the noun phrase is identified as the subject. In the following examples, the words 'AmAra', 'AmAdera', 'NarendranAthera' and 'tomAra' are not related to the verb (action), so the head words 'pena', 'kaleja', 'smritishakti' and 'kA.Ndha' are treated as the subject.

39. আমার েপন আেছ| AmAra pena Achhe. I-gen pen be-pres. I have a pen.
40. আমােদর
|
<s>কেলজ খুব ভাল| AmAdera kaleja khuba bhAla. Our college very good. Our college is very good. 41. নের�নােথর �িৃতশি� িছল িব�য়কর| NarendranAthera smritishakti chhila bismaYakara. Narendranath –gen memory be- past wonderful. The memory of Narendranath was wonderful. 42. েতামার কাঁধ চওড়া িছল| tomAra kA.Ndha chao.DA chhila. You-gen shoulder broad be-past. Your shoulder was broad. The semantics of the verb ‘pAoYA’ as main verb and conjunct verb has two folds- i. animate subject receiving something concrete ii. inanimate subject attaining some state 43. েস রাজেকাষ েথেক ভরণেপাষণ পায়| Dhang 21 se rAjkoSha theke bharaNposhaNa pAYa. he treasury from help get-pres. He gets help from treasury. 44. েরাগ �াসবৃি� পায়| roga hrAsabRRiddhi pAYa. disease increased and decreased get.-pres Disease is increased and decreased. 45. দাম বৃি� পায়| dAma bRRiddhi pAYa. cost increase get-pres . The cost is increased. Here the words se, roga and dAma are all sadharan karta(k1m). When the verb ‘pAoYA’ is used as conjunct verb and the main verb relates to the act of experience, feelings, emotions etc., in this context the subject should be an animate subject and the subject takes ‘ra’ and also zero or nominative marker. 46. েতামার িখেদ েপেয়েছ| tomAra khide peYechhe. Dhang 22 You-gen hunger get-pres, prog. You are hungry. 47. েস ক� েপেয়েছ| se kaShTa peYechhe. he hurt get –pres,perf He feels hurt. Here the words tomAra and se are experiencer karta(k1e). When an event is caused by an animate subject which comes in the position of grammatical subject of the sentence, in such cases the subject is the logical subject and it takes “ra” marker. In the following sentences, the word‘dbArA’is not in the surface structure, but it is implied [1]. 48. েতামার খাওয়া হেয়েছ? tomAra khAoYA haYechhe? You-gen eating be-pres,perf. Have you eaten? 49. তার ভাত খাওয়া হেয়েছ| tAra bhAta khAoYA haYechhe. Dhang 23 he-gen rice eating be-pres,perf. He has eaten rice. 
Here the words tomAra and tAra are passive karta (k1p). In the following sentence, 'kichhu jinisa' is considered as the object (karma) and not as the subject (karta). Here the inherent meaning of the verb 'chokhe pa.Dlo' indicates an animate subject: though the subject does not surface, its occurrence is indicated by the semantics of the verb.

50. রেথর েমলায় িকছু িজিনস েচােখ পড়ল| rathera melAYa kichhu jinisa chokhe pa.Dala. Rath-gen fair-loc few thing eye-loc fall-past. In the fair of Rathajatra a few things came to our sight.

In the following sentence, the word 'meYedera' is identified as the subject because of the form of the verb 'dekhA', that is, the participle form of this verb. Therefore, the grammatical subject of the verb, 'meYedera', is actually the logical subject here.

51. েমেয়েদর েদখা েনই েকন? meYedera dekhA nei kena? Girls-gen see-ppt be-pres-neg why. Why are the girls not present?

6. Conclusion

It can be concluded that the use of semantics is required to identify and classify the karta. It is necessary to identify the logical subject and distinguish it from the grammatical subject. Semantics is also used to understand the nature
|
<s>of the verb, whether it comes as a main verb or as a conjunct verb. Through semantics, the nature of the verb indicates the role of the karta. Semantics also indicates whether the karta is animate or inanimate. We have used a syntactico-semantic version of the grammar for identifying the karta. Both syntax and semantics are used to identify the roles of karta in a Bangla sentence. This notion of grammar may be used for the other dependency relations. This syntactico-semantic relation may be used in a treebank, which may further be used in a statistical parser. A categorization of verbs which indicates the semantics or nature of the verb is needed; with the help of this idea, semantic annotation can be developed. The creation of such resources is expected to improve Bangla NLP systems greatly.

References

[1] Chatterji, Suniti Kumar. "Bhasha-Prakash Bangla Vyakaran" [A Grammar of the Bangla Language], Rupa, New Delhi, pp. 126, 236-239, 363, 2003.
[2] Bharati, A., Chaitanya, V., Sangal, R. "Natural Language Processing: A Paninian Perspective" (1999).
[3] Gangopadhyay, Malaya. "The Noun Phrase in Bengali: Assignment of Role and the Karaka Theory", Motilal Banarsidass, 1990.
[4] Chakraborty, Bamandeb. "Uchchatara Bangla Byakaran" (Higher Bangla Grammar), Akshaya Malancha, Kolkata, 2007.
[5] Sharma, D.M., Sangal, R., Bai, L., Begam, R. "AnnCorra: TreeBanks for Indian Languages", Language Technologies Research Center, IIIT, Hyderabad, India.
[6] de Marneffe, M., Manning, C.D. "Stanford Typed Dependencies Manual" (2008).
[7] Chatterji, S., Sarkar, T.M., Sarkar, S. and Chakraborty, J. "Karak Relations in Bengali", Proceedings of the 31st All-India Conference of Linguists (AICL 2009), Hyderabad, India, pp. 33-36, December 2009.
[8] Marcus, Mitchell P., Marcinkiewicz, Mary Ann and Santorini, Beatrice. "Building a Large Annotated Corpus of English: The Penn Treebank", Comput.
Linguist., MIT Press, June 1993, 19(2), pp. 313-330, Cambridge, MA, USA.
[9] Hajic, J. "Building a Syntactically Annotated Corpus: The Prague Dependency Treebank", in Issues of Valency and Meaning, pp. 106-132, Karolinum, Praha, 1998.
[10] Chatterji, Sanjay, Mukherjee Sarkar, Tanaya, Dhang, Pragati, Deb, Samhita, Sarkar, Sudeshna, Chakraborty, Jayshree, Basu, Anupam. "A dependency annotation scheme for Bangla Treebank", Language Resources and Evaluation, Volume 48, Issue 3, pp. 443-477, September 2014.</s>
|
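The five-way karta classification in the paper above can be caricatured as a small decision procedure. This is only an illustrative sketch: the tag names (k1d, k1p, k1e, k1m, k1s) come from the paper, but the input features (`verb_class`, `has_passive_postposition`, `is_copular`) are hypothetical simplifications of the syntactico-semantic evidence the authors actually combine.

```python
# Hedged sketch of the five-way karta tagging scheme (k1d, k1p, k1e,
# k1m, k1s) described above. Feature names here are hypothetical:
# the paper combines bibhakti markers, postpositions and the verb's
# syntactico-semantic class, not these coarse flags.

KARTA_TAGS = {
    "doer": "k1d",          # kriyasampadak karta: volitional action verb
    "passive": "k1p",       # paroksha karta: passive construction
    "experiencer": "k1e",   # anubhab karta: mental-state verb
    "general": "k1m",       # sadharan karta: existence/undergoing
    "proposition": "k1s",   # samanadhikaran: noun of proposition
}

def tag_karta(verb_class, has_passive_postposition=False, is_copular=False):
    """Map coarse verb-level evidence to a karta tag."""
    if has_passive_postposition:      # e.g. followed by 'dbArA', 'diYe'
        return KARTA_TAGS["passive"]
    if is_copular:                    # complement names the same entity
        return KARTA_TAGS["proposition"]
    if verb_class == "mental":        # feeling/perception verbs
        return KARTA_TAGS["experiencer"]
    if verb_class == "action":        # volitional physical/mental act
        return KARTA_TAGS["doer"]
    return KARTA_TAGS["general"]      # existence, obligation, undergoing

# Example 9 ("ChheleTi phuTabala khelachhe"): an action verb -> k1d.
print(tag_karta("action"))   # k1d
# Example 21 ("AmAra ghuma peYechhe"): a mental-state verb -> k1e.
print(tag_karta("mental"))   # k1e
```

The ordering of the checks mirrors the paper's discussion: overt passive postpositions and copular constructions are the most reliable surface cues, so they are tested before the verb's semantic class.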
<s>978-1-4799-6399-7/14/$31.00 ©2014 IEEE

Feature-based Semantic Analyzer for Parsing Bangla Complex and Compound Sentences

Parijat Prashun Purohit, Mohammed Moshiul Hoque, Mohammad Kamrul Hassan
Chittagong University of Engineering & Technology, Bangladesh

Abstract—A semantic analyzer determines the semantic meaning of the words in a sentence. This paper proposes a semantic analyzer that can semantically parse Bangla sentences. Without semantic analysis, it is very difficult to find the accurate meaning of sentences translated from one language into their equivalent sentences in another language. To analyze Bangla sentences semantically, this study identifies a set of features for each word category in Bangla. Experimental results show that the proposed features can be used effectively for analyzing all kinds of Bangla sentences with the semantic analyzer.

Keywords—Natural language processing, syntax analysis, semantic analysis, semantic features, annotated parse tree.

I. INTRODUCTION

Natural Language Processing (NLP) systems are developed both to explore general theories of human language processing and to support practical tasks such as providing natural language interfaces or front ends to application systems. A language-understanding program must have considerable knowledge about the structure of the language, including what words are and how they are combined into phrases and sentences. It is also essential to know the meaning of the words and how they contribute to the meaning of the sentence in the context within which they are being used. Semantics is the study of sentence meaning, and this meaning is achieved partially by analyzing the syntactic structure(s) and the meaning of the words used in the sentences [1]. The automated creation of accurate and expressive meaning representations necessarily involves a wide range of knowledge sources and inference techniques.
Among the sources of knowledge that are typically involved are the meanings of words, the conventional meanings associated with grammatical constructions, knowledge about the structure of the discourse, common-sense knowledge about the topic at hand, and knowledge about the state of affairs in which the discourse is occurring [2]. There are many ways of thinking about and representing word meanings, but one that has proved useful in the field of machine translation involves associating words with semantic features which correspond to their sense components. Associating words with semantic features is useful because some words impose semantic constraints on what other kinds of words they can occur with. In this work, semantic analysis is performed by assigning features to each word in a sentence, based solely on knowledge gleaned from the lexicon and the grammar. Semantic analysis plays a vital role in resolving vagueness in sentence meaning. It is observed that the semantic properties of the words in a sentence need to be analysed explicitly to produce the actual output. In the dictionary, the semantic features of words are maintained categorically for semantic analysis. When the parser parses a sentence, the words are retrieved with the semantic characteristics that actually establish the word meaning in the sentence. The semantic properties of words comprise three facets, domain, context and task, and the semantic structures constructed from utterances must account for these areas. In the knowledge
|
<s>domain, semantic analysis maps individual words into appropriate objects, and it must create the correct structures to communicate the meaning of the individual words combined with each other. Semantic analysis of the Bangla language is a very challenging task due to its varieties of word formation and the ways it is spoken. Moreover, other factors contribute to the difficulty of semantic analysis, including words with multiple meanings, sentences with multiple grammatical structures, uncertainty about what a pronoun refers to, and so on. Some researchers have already analyzed Bangla sentences syntactically. However, studies on the semantic analysis of Bangla sentences are rare and limited. In addition, guidelines are also inadequate for the semantic analysis of different word categories and sentences. To introduce Bangladeshi products in the global market, it is necessary to write product instructions in various languages. In this regard, an automatic translator is an acceptable candidate: it is capable of translating information faster than human translators, which can save a lot of time and money. However, semantic analysis plays a potential role in generating the exact meaning of Bangla sentences translated into another language. Consider an example, 'goru akashe ure' (the cow flies in the sky). This sentence is syntactically correct but semantically wrong, because we know that a cow (goru) cannot fly. Thus, to produce the legal structure and accurate translation of a sentence, semantic analysis is mandatory. In this paper, a semantic analyzer is proposed to parse a variety of Bangla sentences semantically. To achieve this objective, a group of semantic attributes is recommended for the diverse Bangla word classes. The proposed semantic analyzer is evaluated by analyzing a wide range of sentences with varying word lengths, and the findings show that the analyzer functions well in parsing Bangla sentences semantically. II.
RELATED WORK

Bangla language processing is at a preliminary stage. Very little research has been conducted on the semantic analysis of Bangla sentences, but a significant number of research works have been conducted on the recognition of Bangla alphabets [3, 4]. Some work has been done on the syntax analysis of Bangla simple sentences using CFGs [5, 6] and CSGs [7]. Syntax analysis using CFGs for Bangla simple, complex and compound sentences is presented in [8, 9]. Ali et al. [10] propose a set of rules for morphological analysis to describe the Bangla universal networking language. Some recent works have focused on designing machine translation systems for the Bangla language, such as phrase-based [11], example-based [12, 13], rule-based [14], and statistical [15] approaches. Recently, semantic features of Bangla words with redundancy rules were presented in [16]. A new approach of Mridha et al. [17] solves the semantic ambiguity problem of Bangla root words using the universal networking language. A basic HPSG structure was proposed in [18] for the recognition of semantic correctness. Richardson and his colleagues [19] developed a general methodology for acquiring, structuring, accessing, and exploiting semantic information from natural language text. Bangla simple sentences have been analyzed semantically using a lexical semantics approach
[20]. A recent work has focused on assigning semantic features for parsing Bangla sentences semantically. However, this work was limited to analyzing Bangla simple sentences alone, and its features were designed for limited word categories. Thus, in the previous literature, semantic analysis of complex and compound sentences remains unexplored. In this paper, we propose a framework for a semantic analyzer to parse Bangla complex and compound sentences. III. PROPOSED FRAMEWORK The semantic analyzer deduces the semantic meaning of the words in a sentence. In order to perform semantic analysis of sentences, a lexicon must be implemented with several semantic features. Fig. 1 illustrates the schematic representation of our proposed framework. The following subsections describe this framework in detail. A. Input Sentence The source-language sentences to be parsed are taken as the input of the system. For semantic analysis, we choose complex and compound sentences of Bangla. For example, the complex sentence "je amake porabe se amar bondhu ( )" may be considered as an input. B. Scanner The scanner is the program module that accepts a sentence to be parsed as an unbroken string and breaks it into individual words, called tokens [5]. Tokens are stored in a list for further access. Each token is then checked against the lexicon for validity; some words, if necessary, are combined into groups, because two or more words may represent a single word type. From the input sentence, the scanner generates a set of tokens. For our example, we have six tokens: Subordinator (je), Pronoun (amake), Verb (porabe), Subordinator complement (se), Pronoun (amar), and Noun (bondhu), respectively. Fig. 1. Proposed framework of semantic analyzer. C. Lexicon A lexicon is a dictionary of words in which each word carries some syntactic, semantic, and pragmatic information. The semantic properties of words may include their types, number, gender, person, etc.
[21]. Essentially, entries in an MT dictionary are equivalent to collections of attributes and values (i.e., features). These features are used in the next steps. Valid tokens are also provided by the lexicon. For the above example, six valid tokens are produced: 'je', 'amake', 'porabe', 'se', 'amar', and 'bondhu', respectively. We must assign values for the semantic features of each token. For these tokens, the semantic features are assigned as follows:
েয [je] (Subord): [Honor(0), Number(1), Human(1), Agent(1), Alive(1)]
[amake] (Pronoun): [Person(0), Animate(1), Human(1), Number(1), Honor(0)]
পড়ােব [porabe] (Verb): [Person(0), Animate(1), Human(1), Intelligent(1), Honor(1), Agent(1)]
[se] (Subcom): [Honor(0), Number(1), Human(1), Agent(1), Alive(1)]
[amar] (Pronoun): [Person(0), Animate(1), Human(1), Number(1), Honor(0)]
[bondhu] (Noun): [Animate(1), Human(1), Alive(1), Intelligent(-1), Agent(1), Gender(1), Adult(-1)]
Here, amake (আমােক) is a pronoun in the first person; for first person we assign the value 0, while for second and third person the values are 1 and 2, respectively. Animacy and humanness are among its semantic features, each valued 1 here. As the token is singular in number, we assign 1 (for plural number we assign 2). Finally, the token is non-honorable, so we put 0 in the Honor feature. Similarly, we have assigned values for the other tokens. It should
be mentioned that if a feature for a token is not significant, or is confusing, we assign (-1) to denote it. Generally, five categories of Bangla words (i.e., parts of speech) are found in the Bangla language. Table I shows some semantic features of the Noun category.
TABLE I. SEMANTIC FEATURES OF NOUN
Words | Animate | Human | Agent | Honor | Number | Intelligent | Adult | Gender
Biggani (িবjানী) | 1 | 1 | 1 | -1 | -1 | 1 | 1 | -1
Durbrittora (দবুৃর্tরা) | 1 | 1 | 1 | 0 | 0 | -1 | -1 | -1
Pakhigulo (পািখগুেলা) | 1 | 0 | 1 | 0 | 0 | 0 | -1 | -1
Jubok (যুবক) | 1 | 1 | 1 | -1 | 1 | 1 | 1 | 0
For the Verb category, the proposed semantic features are listed in Table II.
TABLE II. SEMANTIC FEATURES OF VERB
Words | Person | Animate | Human | Intelligent | Honor | Agent
Haschen (হাসেছন) | 2 | 1 | 1 | -1 | 1 | 1
Likhi (িলিখ) | 0 | 1 | 1 | 1 | 0 | 1
Hati (হাঁিট) | 0 | 1 | -1 | -1 | 0 | 1
Jano (জােনা) | 1 | 1 | 1 | 1 | 0 | 1
A negative feature value is assigned when a feature is not significant or relevant to a word. We have assigned the features for Pronoun, Adjective, and Adverb in a similar way. Some examples are presented in Tables III, IV, and V, respectively.
TABLE III. SEMANTIC FEATURES OF PRONOUN
Words | Person | Animate | Human | Number | Honor
Tomader (েতামােদর) | 1 | 1 | 1 | 0 | 0
Unara (uনারা) | 2 | 1 | 1 | 0 | 1
Amake (আমােক) | 0 | 1 | 1 | 1 | -1
TABLE IV. SEMANTIC FEATURES OF ADJECTIVE
Words | Animate | Human | Gender
Buddhimoti (বুিdমতী) | 1 | 1 | 0
Soktiman (শিkমান) | 1 | 1 | 1
Khusi (খুিশ) | -1 | 1 | -1
Sohoj (সহজ) | -1 | 1 | -1
TABLE V. SEMANTIC FEATURES OF ADVERB
Words | Emphasis | Animate | Human | Intelligent | Honor | Tense
Besh (েবশ) | 1 | -1 | -1 | -1 | -1 | -1
Agamidin (আগামীিদন) | 0 | 0 | 0 | -1 | -1 | 2
Ekhon (eখন) | 0 | 0 | 1 | -1 | -1 | 0
Onek (aেনক) | 1 | -1 | 1 | 1 | -1 | -1
Sorbotro (সবর্t) | 1 | -1 | -1 | -1 | -1 | -1
Table VI and Table VII illustrate the semantic features of the various markers used in conjunction with complex and compound sentences, as described in [23]. These markers indicate whether a sentence is complex or compound and divide the entire sentence into two or more clauses for analysis [7]. In most cases, these markers are found as a pair in the complex sentence.
TABLE VI. SEMANTIC FEATURES OF COMPLEX MARKERS
Pair of Markers | Honor | Number | Human | Agent | Alive
Jodi ( ) – Tahole ( ) | -1 | -1 | -1 | -1 | -1
Jini ( ) – Tini ( ) | 1 | 1 | 1 | 1 | 1
Jokhn (যখন) – Tokhn (তখন) | 0 | -1 | 0 | 0 | 0
( ) – ( ) | -1 | 1 | 0 | 0 | 1
Jara ( ) – Tara ( ) | 0 | 0 | 1 | 1 | 1
Je ( ) – Se ( ) | 0 | 1 | 1 | 1 | 1
Jekhane ( ) – Sekhane ( ) | 0 | -1 | 0 | 0 | 0
TABLE VII. SEMANTIC FEATURES OF CONJUNCTION MARKERS
Markers | Negative | Optional | Honor | Contradictory | Sequence
Ebong (eবং) | 0 | 0 | -1 | 0 | 0
Othoba ( ) | 0 | 1 | -1 | 0 | 0
Kintu ( ) | 1 | -1 | -1 | 1 | 0
O (o) | 0 | 0 | -1 | 0 | 0
Boroncho ( ) | 1 | 0 | -1 | 1 | 0
Tai (তাi) | 0 | 0 | -1 | 0 | 1
D. Rule Generator The most common way to represent a grammar is a set of production rules that state how the parts of speech can be put together to make grammatical sentences. CFGs and CSGs are methods of describing a language, and a set of CFG and CSG rules [7, 8] is used in our framework. Tokens are matched against the rules to form a parse tree in the parser. In our example, we need a set of CFG rules that generates "Subord P V Subcom P N" to be matched with the tokens. So, the set of rules provided to the parser is: S → CS; CS → IC DC; IC → Subord SS1; SS1 → NP VP; NP → P; VP → V; DC → Subcom SS2; SS2 → NP NP; NP → P; NP → N; Subord → je (েয); P → amake ( ); V → porabe (পড়ােব); Subcom → se ( ); P → amar ( ); N → bondhu ( ) E. Parser The function of the parser is to take an input string or sentence and produce a parse tree according to the CFG rules. The scanner's output tokens are matched with the appropriate grammatical rules. If the right-hand symbols of a rule match a token, the token is assigned the appropriate word category. The error handler checks for possible errors while parsing and resolves them if necessary. The output of the parser is generally a parse tree, which can also be represented as a list data structure [22]. For our example, we can represent the parse tree in the following list form: S(CS(IC(Subord, je)(SS1(NP(P, amake))(VP(V, porabe)))) (DC(Subcom, se)(SS2(NP(P, amar))(NP(N, bondhu))))) F. Semantic Analyzer For the MT engine, semantic attributes are necessary for the transfer as well as the generation phase. Semantic attributes must be transferred into the target language in such a form that the generation of the target output is semantically correct. The words of different categories and their semantic attributes are stored in a lexicon and retrieved while parsing. Thus, the parse tree goes on to be assigned semantic features.
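As a rough sketch (not the authors' implementation), the lexicon lookup and the annotation of parse-tree leaves with semantic features can be illustrated as follows. The nested lists mirror the list form of the parse tree for the example sentence, and the feature abbreviations (Hon = Honor, Num = Number, Hum = Human, Age = Agent, Ali = Alive, Per = Person, Ani = Animate, Int = Intelligent, Gen = Gender, Adu = Adult) follow the paper; the data-structure choices are assumptions for illustration.

```python
# Hypothetical lexicon: each word maps to its semantic-feature dict
# (values taken from the feature assignments given in the text; -1 = don't care).
LEXICON = {
    "je":     {"Hon": 0, "Num": 1, "Hum": 1, "Age": 1, "Ali": 1},
    "amake":  {"Per": 0, "Ani": 1, "Hum": 1, "Num": 1, "Hon": 0},
    "porabe": {"Per": 0, "Ani": 1, "Hum": 1, "Int": 1, "Hon": 1, "Age": 1},
    "se":     {"Hon": 0, "Num": 1, "Hum": 1, "Age": 1, "Ali": 1},
    "amar":   {"Per": 0, "Ani": 1, "Hum": 1, "Num": 1, "Hon": 0},
    "bondhu": {"Ani": 1, "Hum": 1, "Ali": 1, "Int": -1, "Age": 1, "Gen": 1, "Adu": -1},
}

def annotate(node):
    """Turn a parse tree into an annotated parse tree.

    Interior nodes are lists whose first element is the node label;
    leaves are (category, word) tuples, which get the lexicon's features attached.
    """
    if isinstance(node, list):
        return [annotate(child) for child in node]
    if isinstance(node, tuple):
        category, word = node
        return (category, word, LEXICON[word])
    return node  # a bare node label such as "S"

# List form of the parse tree for "je amake porabe se amar bondhu".
parse = ["S",
         ["CS",
          ["IC", ("Subord", "je"),
                 ["SS1", ["NP", ("P", "amake")], ["VP", ("V", "porabe")]]],
          ["DC", ("Subcom", "se"),
                 ["SS2", ["NP", ("P", "amar")], ["NP", ("N", "bondhu")]]]]]

annotated = annotate(parse)
```

Running `annotate` over the example tree yields an annotated parse tree whose leaves carry the feature dictionaries stored in the lexicon, which is the input the evaluator then works on.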
After generating the parse tree, the analyzer incorporates all the features of each word, according to its category, as assigned in the lexicon. As a result, the analyzer generates an annotated parse tree for the input sentence. The leaves of an annotated parse tree contain specific semantic features. Whether a sentence is correct or not does not depend on all the semantic features of its words; we compare only those features that are significant for checking a sentence's correctness. The features of a token that were stored in the lexicon are now added to the parse tree to form the annotated parse tree. Fig. 2 depicts the annotated parse tree for our example. Fig. 2. Annotated parse tree for the
complex sentence, "je amake porabe se amar bondhu (েয আমােক পড়ােব েস আমার বnু)". G. Evaluator The evaluator verifies the correctness of the input sentence on the basis of semantic features. All tokens, with their appropriate features, are stored in the lexicon. After the annotated parse tree is generated, the relevant feature values are mapped. If the feature values match, the sentence is judged semantically correct; otherwise, it is judged incorrect. Fig. 3 illustrates the mapping of features for the complex sentence "jodi tumi boi poro tahole tumi valo korbe (যিদ তুিম বi পেড়া তাহেল তুিম ভােলা করেব)". In this example, the analyzer first maps the features of the complex markers ('jodi' and 'tahole'). If all of their features contain the same value (either 0 or 1), the analyzer then checks the semantic correctness of the sub-sentences. Here, two modules map the features of the two sub-sentences, 'tumi boi poro' and 'tumi valo korbe'. In each module the corresponding features are first retrieved with their values. If a value is don't-care (-1), we skip the mapping; in module 1, for instance, Animate has the value -1, so no mapping is needed. If two tokens have common features valued either 0 or 1, the analyzer maps those pair by pair. Matching pairs are shown by arrows, and all mappings must match (green arrows in the figure) for the sentence to be semantically correct. If there is a mismatch in any value, the sentence is regarded as semantically incorrect. In this example there is no such mismatch, so the outputs of the two modules finally verify the sentence as semantically correct. Fig. 3.
Features mapping process diagram for the complex sentence, jodi tumi boi poro tahole tumi valo korbe (যিদ তুিম বi পেড়া তাহেল তুিম ভােলা করেব). H. Output The semantic analyzer generates the parsing output of a given input sentence together with the appropriate decision (i.e., semantically correct or not). It displays a tag such as 'semantically correct' or 'semantically incorrect' depending on whether the sentence is correct or wrong. In addition, it provides information about the tokens, the grammatical rules used, the parse tree, and the semantic features involved in the parsed sentence. IV. EXPERIMENTAL RESULTS A. Output of Semantic Analyzer A complex or compound sentence is a combination of simple sentences. Two simple sentences are connected with a conjunction. If the two sentences are individually correct, then the overall evaluation of the sentence is also correct. A sentence consisting of one principal clause and one or more subordinate clauses is a complex sentence; for a complex sentence, both clauses are required to be verified. If the principal clause and the subordinate clause are semantically correct, then the sentence is correct. A snapshot of the analyzer's result for a complex sentence is shown in Fig. 4. Fig. 4. Semantic analysis of Bangla complex sentence, jodi tumi boi
poro tahole tumi valo korbe (যিদ তুিম বi পেড়া তাহেল তুিম ভােলা করেব). This figure indicates the semantic features for the input sentence "jodi tumi boi poro tahole tumi valo korbe (যিদ তুিম বi পেড়া তাহেল তুিম ভােলা করেব)" as follows:
যিদ (jodi): [Hon(-1), Num(-1), Hum(-1), Alive(-1), Agent(-1)]
তুিম (tumi): [Ani(1), Hum(1), Per(1), Num(1), Hon(0)]
বi (boi): [Ani(0), Hum(0), Ali(0), Int(1), Age(-1), Gen(-1), Adu(-1), Hon(-1)]
পেড়া (poro): [Ani(1), Hum(1), Int(1), Hon(0), Per(1), Age(1)]
তাহেল (tahole): [Hon(-1), Num(-1), Hum(-1), Alive(-1), Agent(-1)]
তুিম (tumi): [Ani(1), Hum(1), Per(1), Num(1), Hon(0)]
ভােলা (valo): [Ani(-1), Hum(-1), Ali(0), Int(-1), Age(0), Gen(1), Adu(-1), Hon(-1)]
করেব (korbe): [Ani(1), Hum(1), Int(-1), Hon(0), Per(2), Age(1)]
The sentence is correct syntactically as well as semantically. If we consider the markers "jodi (যিদ)" and "tahole (তাহেল)", both of these complex markers have similar semantic features. We then check the semantic correctness of the individual clauses; the two clauses are individually correct in the semantic sense, so the output is semantically correct (ভাবগতভােব শdু). A sentence having more than one principal clause, linked by one or more coordinating conjunctions preceded by a comma, is called a compound sentence. Conjunctions used in Bangla compound sentences include ebong (eবং), othoba, kintu, o (o), and so on. An analysis of the compound sentence "durbrittora hamla kore ebong pulishra greftar kore (দবুৃর্tরা হামলা কের eবং পুিলশরা েgফতার কের)" is shown in Fig. 5. Fig. 5. Semantic analysis of Bangla compound sentence, durbrittora hamla kore ebong pulishra greftar kore (দবুৃর্tরা হামলা কের eবং পুিলশরা েgফতার কের). This compound sentence consists of two simple sentences, and each is checked for its correctness. To determine that the sentence is compound, we require the features of the token 'ebong (eবং)'. When those features are matched, the compound sentence is subdivided into two parts, each of which is verified for semantic correctness.
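The clause-level feature mapping described above can be sketched as a small function (an illustrative assumption, not the authors' code): features shared by two tokens are compared pair by pair, a value of -1 is treated as don't-care and skipped, and a clause is semantically consistent only when every mapped pair matches.

```python
def features_match(a, b):
    """Map the semantic features shared by two tokens pair by pair.

    a, b: dicts of semantic features, e.g. {"Per": 1, "Hon": 0}.
    A value of -1 means "don't care", so that pair is skipped.
    Returns True when no mapped pair mismatches (semantically consistent).
    """
    for name in a.keys() & b.keys():     # features common to both tokens
        if a[name] == -1 or b[name] == -1:
            continue                     # don't-care value: no mapping needed
        if a[name] != b[name]:
            return False                 # mismatch -> semantically incorrect
    return True

# Module 1 of the example: clause "tumi boi poro" (values from the paper's listing)
tumi = {"Ani": 1, "Hum": 1, "Per": 1, "Num": 1, "Hon": 0}
poro = {"Ani": 1, "Hum": 1, "Int": 1, "Hon": 0, "Per": 1, "Age": 1}
print(features_match(tumi, poro))        # prints True
```

A complex sentence would then be accepted when the marker pair matches and each clause's mapping succeeds, mirroring the two-module flow of Fig. 3.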
Semantic features of all the words in the example sentence are illustrates in the following: দবুৃর্tরা (durbrittora): [Ani(1),Hum(1),Ali(1),Int(-1),Age(1,Gen(-1), Adu(1), Hon(0),Age(1)] হামলা (hamla): [Ani(0),Hum(0),Ali(0),Int(-1),Age(0,Gen(-1),Adu(-1),Hon(0)] কের (kore): [Ani(1),Hum(-1),Int(-1),Hon(0),Per(2),Age(1)] eবং (ebong): [Neg(0),Opt(0),Hon(-1,Con(0,Seq(0)] পুিলশরা (pulishra): [Ani(1),Hum(1),Ali(1),Int(-1),Age(1,Gen(-1),Adu(1), Hon(0),Age(1)] েgফতার (greftar): [Ani(0),Hum(0),Ali(-1),Int(-1),Age(-1,Gen(-1),Adu(-1), Hon(0)] কের ( kore): [Ani(1),Hum(-1),Int(-1),Hon(0),Per(2),Age(1)] B. Performance Analysis Our proposed system can analyze simple, complex and compound sentences of Bangla. We have tested the system for 1120 sentences in total with several of sentence length. Among them a total of 550 were complex sentences and rest of them was compound sentences. Table III summarizes the accuracy measures for different length of sentences. TABLE VIII. PERFORMANCE EVALUATION OF PROPOSED SYSTEM Sentence ype o. of input sentences ord engths o. of correctly parsed sentences rror ccuracy plex 200 5 200 0.00% 100% 100 6 100 0.00% 100% 150 7 147 2% 98% 80 8 67 16.25% 83.75% 20 9 14 30% 70% pound 100 5 100 0.00% 100% 110 6 110 0.00% 100% 250 7 249 0.4% 99.6% 80 8 70 12.5% 87.5% 30 9 20 33.33% 66.67% Here, ‘accuracy’ refers to the ratio between the total number of sentences that are correctly parsed and total number sentences that are inputted in the framework. The ‘sentence length’ refers to the total number of words in a given sentence. Sentences are collected from different Bangla books and newspapers. The input sentence length varies from 5 to 9. The result of analysis of accuracy for different sentence</s>
length is illustrated in Fig. 6. This result reveals that the accuracy of the system falls sharply with increasing sentence length for both types of sentences. Fig. 6. Analysis of accuracy vs. sentence length. V. CONCLUSION In any language, semantic analysis can play a significant role in machine translation and language-understanding problems. The semantic properties of the words of a source language must be considered to generate a meaningfully correct machine translation. For proper machine translation from Bangla to English or vice versa, the semantic features of Bangla words should be structured. Our proposed framework can analyze complex and compound sentences with a set of semantic features, and the experimental results show that the performance of the system is quite good. A semantic analyzer potentially serves a language by combining the meanings of words and phrases. Future extensions may include more features to analyze Bangla idioms and phrases, complex phrases, and punctuation symbols. References [1] D. W. Patterson, Introduction to Artificial Intelligence and Expert Systems, Prentice Hall, India, 2002. [2] D. Jurafsky and J. H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, Prentice Hall, Englewood Cliffs, New Jersey, 2006. [3] M. K. I. Molla and K. M. Talukder, "Bangla number extraction and recognition from document image", Proc. of 10th Int. Conf. on Computer and Information Technology (ICCIT), pp. 512-517, 2007. [4] M. M. Hoque, M. R. Karim, M. G. Hossain, M. S. Arefin and M. M. U. Hasan, "Bangla numeral recognition engine (BNRE)", Proc. of Int. Conf. on Electrical and Computer Engineering (ICECE), pp. 644-647, 2008. [5] M. M. Murshed, "Parsing of Bengali natural language sentences", Proc. International Conference on Computer and Information Technology (ICCIT'98), Dhaka, Bangladesh, pp. 185-189, vol. 1, 1998. [6] M. R. Selim and M. Z.
Ikbal, "Syntax analysis of phrases and different types of sentences in Bangla", Proc. International Conference on Computer and Information Technology (ICCIT'99), Sylhet, Bangladesh, pp. 175-186, vol. 2, 1999. [7] M. M. Hoque and M. M. Ali, "Context-sensitive phrase structure rules for structural representation of Bangla natural language sentences", Proc. Int. Conf. on Computer and Information Technology (ICCIT), pp. 615-620, 2004. [8] M. M. Hoque and M. M. Ali, "A parsing methodology for Bangla natural language sentences", Proc. International Conference on Computer and Information Technology (ICCIT'03), Dhaka, Bangladesh, pp. 277-282, vol. 2, 2003. [9] L. Mehedy, S. M. Arefin and M. Kaykobad, "Bangla syntax analysis: a comprehensive approach", Proc. International Conference on Computer and Information Technology (ICCIT'03), Dhaka, Bangladesh, pp. 287-293, vol. 5, 2003. [10] M. N. Y. Ali, M. Z. H. Sarker, G. F. Ahmed and J. K. Das, "Rules for morphological analysis of Bangla verbs for universal networking language", International Conference on Asian Language Processing (IALP), pp. 31-34, 2010. [11] M. Rabbani, K. M. R. Alam and M. Islam, "A new verb based approach for English to Bangla machine translation", Proc. International Conference on Informatics, Electronics & Vision (ICIEV), pp. 1-6, 2014. [12] S. K. Naskar and S. Bandyopadhyay, "A
phrasal EBMT system for translating English to Bengali", Proc. of the Workshop on Language, Artificial Intelligence and Computer Science for Natural Language Processing Applications (LAICS-NLP), 2006. [13] D. Saha and S. Bandyopadhyay, "A semantics-based English-Bengali EBMT system for translating news headlines", MT Summit X, Phuket, Thailand, 2005. [14] J. F. Islam and M. M. Mia, Adapting Rule Based Machine Translation from English to Bangla: An Easy Solution for Translation, Lap Lambert Academic Publishing, 2012. [15] M. Z. Islam, English to Bangla Phrase-Based Statistical Machine Translation, Master's Thesis, Department of Computational Linguistics, Saarland University, Germany, 2009. [16] M. M. Hoque and M. M. Ali, "Semantic features and redundancy rules for analyzing Bangla sentences", Proc. International Conference on Computer and Information Technology (ICCIT'05), Dhaka, Bangladesh, pp. 1198-1201, vol. 4, 2005. [17] M. F. Mridha, A. K. Saha and J. K. Das, "New approach of solving semantic ambiguity problem of Bangla root words using universal networking language (UNL)", Proc. International Conference on Informatics, Electronics & Vision (ICIEV), pp. 1-6, 2014. [18] M. A. Islam, K. M. A. Hasan and M. M. Rahman, "Basic HPSG structure for Bangla grammar", International Conference on Computer and Information Technology (ICCIT), pp. 185-189, 2012. [19] S. D. Richardson, W. B. Dolan and L. Vanderwende, "MindNet: acquiring and structuring semantic information from text", Proc. COLING-ACL, pp. 1098-1102, 1998. [20] M. M. Hoque, M. J. Rahman and P. K. Dhar, "Lexical semantics: a new approach to analyze the Bangla sentence with semantic features", Proc. International Conference on Information and Communication Technology (ICCIT), Dhaka, Bangladesh, pp. 87-91, vol. 1, 2007. [21] A. J. Thomson and A. V. Martinet, A Practical English Grammar, Green View Publishers, Dhaka, Bangladesh, 2001. [22] A.
Trujillo, Translation Engines: Techniques for Machine Translation, Springer-Verlag, London, 1992. [23] H. Azad, Bakkyatotto, Dhaka University, Dhaka, Second Edition, 1994.
/OfficinaSerif-BoldItalic /OfficinaSerif-Book /OfficinaSerif-BookItalic /OldEnglishTextMT /Onyx /OnyxBT-Regular /OzHandicraftBT-Roman /PalaceScriptMT /Palatino-Bold /Palatino-BoldItalic /Palatino-Italic /PalatinoLinotype-Bold /PalatinoLinotype-BoldItalic /PalatinoLinotype-Italic /PalatinoLinotype-Roman /Palatino-Roman /PapyrusPlain /Papyrus-Regular /Parchment-Regular /Parisian /ParkAvenue /Penumbra-SemiboldFlare /Penumbra-SemiboldSans /Penumbra-SemiboldSerif /PepitaMT /Perpetua /Perpetua-Bold /Perpetua-BoldItalic /Perpetua-Italic /PerpetuaTitlingMT-Bold /PerpetuaTitlingMT-Light /PhotinaCasualBlack /Playbill /PMingLiU /Poetica-SuppOrnaments /PoorRichard-Regular /PopplLaudatio-Italic /PopplLaudatio-Medium /PopplLaudatio-MediumItalic /PopplLaudatio-Regular /PrestigeElite /Pristina-Regular /PTBarnumBT-Regular /Raavi /RageItalic /Ravie /RefSpecialty /Ribbon131BT-Bold /Rockwell /Rockwell-Bold /Rockwell-BoldItalic /Rockwell-Condensed /Rockwell-CondensedBold /Rockwell-ExtraBold /Rockwell-Italic /Rockwell-Light /Rockwell-LightItalic /Rod /RodTransparent /RunicMT-Condensed /Sanvito-Light /Sanvito-Roman /ScriptC /ScriptMTBold /SegoeUI /SegoeUI-Bold /SegoeUI-BoldItalic /SegoeUI-Italic /Serpentine-BoldOblique /ShelleyVolanteBT-Regular /ShowcardGothic-Reg /Shruti /SimHei /SimSun /SimSun-PUA /SnapITC-Regular /StandardSymL /Stencil /StoneSans /StoneSans-Bold /StoneSans-BoldItalic /StoneSans-Italic /StoneSans-Semibold /StoneSans-SemiboldItalic /Stop /Swiss721BT-BlackExtended /Sylfaen /Symbol /SymbolMT /Tahoma /Tahoma-Bold /Tci1 /Tci1Bold /Tci1BoldItalic /Tci1Italic /Tci2 /Tci2Bold /Tci2BoldItalic /Tci2Italic /Tci3 /Tci3Bold /Tci3BoldItalic /Tci3Italic /Tci4 /Tci4Bold /Tci4BoldItalic /Tci4Italic /TechnicalItalic /TechnicalPlain /Tekton /Tekton-Bold /TektonMM /Tempo-HeavyCondensed /Tempo-HeavyCondensedItalic /TempusSansITC /Times-Bold /Times-BoldItalic /Times-BoldItalicOsF /Times-BoldSC /Times-ExtraBold /Times-Italic /Times-ItalicOsF /TimesNewRomanMT-ExtraBold 
/TimesNewRomanPS-BoldItalicMT /TimesNewRomanPS-BoldMT /TimesNewRomanPS-ItalicMT /TimesNewRomanPSMT /Times-Roman /Times-RomanSC /Trajan-Bold /Trebuchet-BoldItalic /TrebuchetMS /TrebuchetMS-Bold /TrebuchetMS-Italic /Tunga-Regular /TwCenMT-Bold /TwCenMT-BoldItalic /TwCenMT-Condensed /TwCenMT-CondensedBold /TwCenMT-CondensedExtraBold /TwCenMT-CondensedMedium /TwCenMT-Italic /TwCenMT-Regular /Univers-Bold /Univers-BoldItalic /UniversCondensed-Bold /UniversCondensed-BoldItalic /UniversCondensed-Medium /UniversCondensed-MediumItalic /Univers-Medium /Univers-MediumItalic /URWBookmanL-DemiBold /URWBookmanL-DemiBoldItal /URWBookmanL-Ligh /URWBookmanL-LighItal /URWChanceryL-MediItal /URWGothicL-Book /URWGothicL-BookObli /URWGothicL-Demi /URWGothicL-DemiObli /URWPalladioL-Bold /URWPalladioL-BoldItal /URWPalladioL-Ital /URWPalladioL-Roma /USPSBarCode /VAGRounded-Black /VAGRounded-Bold /VAGRounded-Light /VAGRounded-Thin</s>
|
<s>/Verdana /Verdana-Bold /Verdana-BoldItalic /Verdana-Italic /VerdanaRef /VinerHandITC /Viva-BoldExtraExtended /Vivaldii /Viva-LightCondensed /Viva-Regular /VladimirScript /Vrinda /Webdings /Westminster /Willow /Wingdings2 /Wingdings3 /Wingdings-Regular /WNCYB10 /WNCYI10 /WNCYR10 /WNCYSC10 /WNCYSS10 /WoodtypeOrnaments-One /WoodtypeOrnaments-Two /WP-ArabicScriptSihafa /WP-ArabicSihafa /WP-BoxDrawing /WP-CyrillicA /WP-CyrillicB /WP-GreekCentury /WP-GreekCourier /WP-GreekHelve /WP-HebrewDavid /WP-IconicSymbolsA /WP-IconicSymbolsB /WP-Japanese /WP-MathA /WP-MathB /WP-MathExtendedA /WP-MathExtendedB /WP-MultinationalAHelve /WP-MultinationalARoman /WP-MultinationalBCourier /WP-MultinationalBHelve /WP-MultinationalBRoman /WP-MultinationalCourier /WP-Phonetic /WPTypographicSymbols /XYATIP10 /XYBSQL10 /XYBTIP10 /XYCIRC10 /XYCMAT10 /XYCMBT10 /XYDASH10 /XYEUAT10 /XYEUBT10 /ZapfChancery-MediumItalic /ZapfDingbats /ZapfHumanist601BT-Bold /ZapfHumanist601BT-BoldItalic /ZapfHumanist601BT-Demi /ZapfHumanist601BT-DemiItalic /ZapfHumanist601BT-Italic /ZapfHumanist601BT-Roman /ZWAdobeF /NeverEmbed [ true /AntiAliasColorImages false /CropColorImages true /ColorImageMinResolution 200 /ColorImageMinResolutionPolicy /OK /DownsampleColorImages true /ColorImageDownsampleType /Bicubic /ColorImageResolution 300 /ColorImageDepth -1 /ColorImageMinDownsampleDepth 1 /ColorImageDownsampleThreshold 2.00333 /EncodeColorImages true /ColorImageFilter /DCTEncode /AutoFilterColorImages true /ColorImageAutoFilterStrategy /JPEG /ColorACSImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /ColorImageDict << /QFactor 1.30 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /JPEG2000ColorACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 10 /JPEG2000ColorImageDict << /TileWidth 256 /TileHeight 256 /Quality 10 /AntiAliasGrayImages false /CropGrayImages true /GrayImageMinResolution 200 /GrayImageMinResolutionPolicy /OK /DownsampleGrayImages true /GrayImageDownsampleType /Bicubic /GrayImageResolution 
300 /GrayImageDepth -1 /GrayImageMinDownsampleDepth 2 /GrayImageDownsampleThreshold 2.00333 /EncodeGrayImages true /GrayImageFilter /DCTEncode /AutoFilterGrayImages true /GrayImageAutoFilterStrategy /JPEG /GrayACSImageDict << /QFactor 0.76 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /GrayImageDict << /QFactor 1.30 /HSamples [2 1 1 2] /VSamples [2 1 1 2] /JPEG2000GrayACSImageDict << /TileWidth 256 /TileHeight 256 /Quality 10 /JPEG2000GrayImageDict << /TileWidth 256 /TileHeight 256 /Quality 10 /AntiAliasMonoImages false /CropMonoImages true /MonoImageMinResolution 400 /MonoImageMinResolutionPolicy /OK /DownsampleMonoImages true /MonoImageDownsampleType /Bicubic /MonoImageResolution 600 /MonoImageDepth -1 /MonoImageDownsampleThreshold 1.00167 /EncodeMonoImages true /MonoImageFilter /CCITTFaxEncode /MonoImageDict << /K -1 /AllowPSXObjects false /CheckCompliance [ /None /PDFX1aCheck false /PDFX3Check false /PDFXCompliantPDFOnly false /PDFXNoTrimBoxError true /PDFXTrimBoxToMediaBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXSetBleedBoxToMediaBox true /PDFXBleedBoxToTrimBoxOffset [ 0.00000 0.00000 0.00000 0.00000 /PDFXOutputIntentProfile (None) /PDFXOutputConditionIdentifier () /PDFXOutputCondition () /PDFXRegistryName () /PDFXTrapped /False /CreateJDFFile false /Description << /ARA 
<FEFF06270633062A062E062F0645002006470630064700200627064406250639062F0627062F0627062A002006440625064606340627062100200648062B062706260642002000410064006F00620065002000500044004600200645062A064806270641064206290020064406440639063106360020063906440649002006270644063406270634062900200648064506460020062E06440627064400200631063306270626064400200627064406280631064A062F002006270644062506440643062A063106480646064A00200648064506460020062E064406270644002006350641062D0627062A0020062706440648064A0628061B0020064A06450643064600200641062A062D00200648062B0627062606420020005000440046002006270644064506460634062306290020062806270633062A062E062F062706450020004100630072006F0062006100740020064800410064006F006200650020005200650061006400650072002006250635062F0627063100200035002E0030002006480627064406250635062F062706310627062A0020062706440623062D062F062B002E> /BGR <FEFF04180437043f043e043b043704320430043904420435002004420435043704380020043d0430044104420440043e0439043a0438002c00200437043000200434043000200441044a0437043404300432043004420435002000410064006f00620065002000500044004600200434043e043a0443043c0435043d04420438002c0020043c0430043a04410438043c0430043b043d043e0020043f044004380433043e04340435043d04380020043704300020043f043e043a0430043704320430043d04350020043d043000200435043a04400430043d0430002c00200435043b0435043a04420440043e043d043d04300020043f043e044904300020043800200418043d044204350440043d04350442002e002000200421044a04370434043004340435043d043804420435002000500044004600200434043e043a0443043c0435043d044204380020043c043e0433043004420020043404300020044104350020043e0442043204300440044f0442002004410020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200441043b0435043404320430044904380020043204350440044104380438002e> /CHS 
<FEFF4f7f75288fd94e9b8bbe5b9a521b5efa7684002000410064006f006200650020005000440046002065876863900275284e8e5c4f5e55663e793a3001901a8fc775355b5090ae4ef653d190014ee553ca901a8fc756e072797f5153d15e03300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c676562535f00521b5efa768400200050004400460020658768633002> /CHT <FEFF4f7f752890194e9b8a2d7f6e5efa7acb7684002000410064006f006200650020005000440046002065874ef69069752865bc87a25e55986f793a3001901a904e96fb5b5090f54ef650b390014ee553ca57287db2969b7db28def4e0a767c5e03300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c4f86958b555f5df25efa7acb76840020005000440046002065874ef63002> /CZE <FEFF005400610074006f0020006e006100730074006100760065006e00ed00200070006f0075017e0069006a007400650020006b0020007600790074007600e101590065006e00ed00200064006f006b0075006d0065006e0074016f002000410064006f006200650020005000440046002c0020006b00740065007200e90020007300650020006e0065006a006c00e90070006500200068006f006400ed002000700072006f0020007a006f006200720061007a006f007600e1006e00ed0020006e00610020006f006200720061007a006f007600630065002c00200070006f007300ed006c00e1006e00ed00200065002d006d00610069006c0065006d00200061002000700072006f00200069006e007400650072006e00650074002e002000200056007900740076006f01590065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f007400650076015900ed007400200076002000700072006f006700720061006d0065006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076011b006a016100ed00630068002e> /DAN 
<FEFF004200720075006700200069006e0064007300740069006c006c0069006e006700650072006e0065002000740069006c0020006100740020006f007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000620065006400730074002000650067006e006500720020007300690067002000740069006c00200073006b00e60072006d007600690073006e0069006e0067002c00200065002d006d00610069006c0020006f006700200069006e007400650072006e00650074002e0020004400650020006f007000720065007400740065006400650020005000440046002d0064006f006b0075006d0065006e0074006500720020006b0061006e002000e50062006e00650073002000690020004100630072006f00620061007400200065006c006c006500720020004100630072006f006200610074002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e> /DEU <FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200064006900650020006600fc00720020006400690065002000420069006c006400730063006800690072006d0061006e007a0065006900670065002c00200045002d004d00610069006c0020006f006400650072002000640061007300200049006e007400650072006e00650074002000760065007200770065006e006400650074002000770065007200640065006e00200073006f006c006c0065006e002e002000450072007300740065006c006c007400650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000410064006f00620065002000520065006100640065007200200035002e00300020006f0064006500720020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP 
<FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f00730020005000440046002000640065002000410064006f0062006500200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e00200065006e002000700061006e00740061006c006c0061002c00200063006f007200720065006f00200065006c006500630074007200f3006e00690063006f0020006500200049006e007400650072006e00650074002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /ETI <FEFF004b00610073007500740061006700650020006e0065006900640020007300e400740074006500690064002000730065006c006c0069007300740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740069006400650020006c006f006f006d006900730065006b0073002c0020006d0069007300200073006f006200690076006100640020006b00f500690067006500200070006100720065006d0069006e006900200065006b007200610061006e0069006c0020006b007500760061006d006900730065006b0073002c00200065002d0070006f0073007400690067006100200073006100610074006d006900730065006b00730020006a006100200049006e007400650072006e00650074006900730020006100760061006c00640061006d006900730065006b0073002e00200020004c006f006f0064007500640020005000440046002d0064006f006b0075006d0065006e00740065002000730061006100740065002000610076006100640061002000700072006f006700720061006d006d006900640065006700610020004100630072006f0062006100740020006e0069006e0067002000410064006f00620065002000520065006100640065007200200035002e00300020006a00610020007500750065006d006100740065002000760065007200730069006f006f006e00690064006500670061002e> /FRA 
<FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000640065007300740069006e00e90073002000e000200049006e007400650072006e00650074002c002000e0002000ea007400720065002000610066006600690063006800e90073002000e00020006c002700e9006300720061006e002000650074002000e0002000ea00740072006500200065006e0076006f007900e9007300200070006100720020006d006500730073006100670065007200690065002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE 
<FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003c003bf03c5002003b503af03bd03b103b9002003ba03b103c42019002003b503be03bf03c703ae03bd002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003c003b103c103bf03c503c303af03b103c303b7002003c303c403b703bd002003bf03b803cc03bd03b7002c002003b303b903b100200065002d006d00610069006c002c002003ba03b103b9002003b303b903b1002003c403bf0020039403b903b1002d03b403af03ba03c403c503bf002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b903c2002e> /HEB 
<FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005D405DE05D505EA05D005DE05D905DD002005DC05EA05E605D505D205EA002005DE05E105DA002C002005D305D505D005E8002005D005DC05E705D805E805D505E005D9002005D505D405D005D905E005D805E805E005D8002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV 
<FEFF005a00610020007300740076006100720061006e006a0065002000500044004600200064006f006b0075006d0065006e0061007400610020006e0061006a0070006f0067006f0064006e0069006a006900680020007a00610020007000720069006b0061007a0020006e00610020007a00610073006c006f006e0075002c00200065002d0070006f0161007400690020006900200049006e007400650072006e0065007400750020006b006f00720069007300740069007400650020006f0076006500200070006f0073007400610076006b0065002e00200020005300740076006f00720065006e0069002000500044004600200064006f006b0075006d0065006e007400690020006d006f006700750020007300650020006f00740076006f00720069007400690020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006b00610073006e0069006a0069006d0020007600650072007a0069006a0061006d0061002e> /HUN <FEFF00410020006b00e9007000650072006e00790151006e0020006d00650067006a0065006c0065006e00ed007400e9007300680065007a002c00200065002d006d00610069006c002000fc007a0065006e006500740065006b00620065006e002000e90073002000200049006e007400650072006e006500740065006e0020006800610073007a006e00e1006c00610074006e0061006b0020006c006500670069006e006b00e1006200620020006d0065006700660065006c0065006c0151002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c0020006b00e90073007a00ed0074006800650074002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA 
<FEFF005500740069006c0069007a007a006100720065002000710075006500730074006500200069006d0070006f007300740061007a0069006f006e00690020007000650072002000630072006500610072006500200064006f00630075006d0065006e00740069002000410064006f00620065002000500044004600200070006900f9002000610064006100740074006900200070006500720020006c0061002000760069007300750061006c0069007a007a0061007a0069006f006e0065002000730075002000730063006800650072006d006f002c0020006c006100200070006f00730074006100200065006c0065007400740072006f006e0069006300610020006500200049006e007400650072006e00650074002e0020004900200064006f00630075006d0065006e007400690020005000440046002000630072006500610074006900200070006f00730073006f006e006f0020006500730073006500720065002000610070006500720074006900200063006f006e0020004100630072006f00620061007400200065002000410064006f00620065002000520065006100640065007200200035002e003000200065002000760065007200730069006f006e006900200073007500630063006500730073006900760065002e> /JPN <FEFF753b97624e0a3067306e8868793a3001307e305f306f96fb5b5030e130fc30eb308430a430f330bf30fc30cd30c330c87d4c7531306790014fe13059308b305f3081306e002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b9069305730663044307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c306a308f305a300130d530a130a430eb30b530a430ba306f67005c0f9650306b306a308a307e30593002> /KOR 
<FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020d654ba740020d45cc2dc002c0020c804c7900020ba54c77c002c0020c778d130b137c5d00020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /LTH <FEFF004e006100750064006f006b0069007400650020016100690075006f007300200070006100720061006d006500740072007500730020006e006f0072011700640061006d00690020006b0075007200740069002000410064006f00620065002000500044004600200064006f006b0075006d0065006e007400750073002c0020006b00750072006900650020006c0061006200690061007500730069006100690020007000720069007400610069006b00790074006900200072006f006400790074006900200065006b00720061006e0065002c00200065006c002e002000700061016100740075006900200061007200200069006e007400650072006e0065007400750069002e0020002000530075006b0075007200740069002000500044004600200064006f006b0075006d0065006e007400610069002000670061006c006900200062016b007400690020006100740069006400610072006f006d00690020004100630072006f006200610074002000690072002000410064006f00620065002000520065006100640065007200200035002e0030002000610072002000760117006c00650073006e0117006d00690073002000760065007200730069006a006f006d00690073002e> /LVI 
<FEFF0049007a006d0061006e0074006f006a00690065007400200161006f00730020006900650073007400610074012b006a0075006d00750073002c0020006c0061006900200076006500690064006f00740075002000410064006f00620065002000500044004600200064006f006b0075006d0065006e007400750073002c0020006b006100730020006900720020012b00700061016100690020007000690065006d01130072006f007400690020007201010064012b01610061006e0061006900200065006b00720101006e0101002c00200065002d00700061007300740061006d00200075006e00200069006e007400650072006e006500740061006d002e00200049007a0076006500690064006f006a006900650074002000500044004600200064006f006b0075006d0065006e007400750073002c0020006b006f002000760061007200200061007400760113007200740020006100720020004100630072006f00620061007400200075006e002000410064006f00620065002000520065006100640065007200200035002e0030002c0020006b0101002000610072012b00200074006f0020006a00610075006e0101006b0101006d002000760065007200730069006a0101006d002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken die zijn geoptimaliseerd voor weergave op een beeldscherm, e-mail en internet. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d00200065007200200062006500730074002000650067006e0065007400200066006f007200200073006b006a00650072006d007600690073006e0069006e0067002c00200065002d0070006f007300740020006f006700200049006e007400650072006e006500740074002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002000730065006e006500720065002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f002000770079015b0077006900650074006c0061006e006900610020006e006100200065006b00720061006e00690065002c0020007700790073007901420061006e0069006100200070006f0063007a0074010500200065006c0065006b00740072006f006e00690063007a006e01050020006f00720061007a00200064006c006100200069006e007400650072006e006500740075002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB 
<FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f0062006500200050004400460020006d00610069007300200061006400650071007500610064006f00730020007000610072006100200065007800690062006900e700e3006f0020006e0061002000740065006c0061002c0020007000610072006100200065002d006d00610069006c007300200065002000700061007200610020006100200049006e007400650072006e00650074002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM <FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e0074007200750020006100660069015f006100720065006100200070006500200065006300720061006e002c0020007400720069006d0069007400650072006500610020007000720069006e00200065002d006d00610069006c0020015f0069002000700065006e00740072007500200049006e007400650072006e00650074002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS 
Polygot: An Approach Towards Reliable Translation By Name Identification And Memory Optimization Using Semantic Analysis

Md. Adnanul Islam∗, A. B. M. Alim Al Islam† and Md. Saidul Hoque Anik‡
Department of CSE, BUET, Dhaka 1000, Bangladesh
Email: ∗islamadnan2265@gmail.com, †alim razi@cse.buet.ac.bd, ‡onix.hoque@gmail.com

Abstract—We present a study of improving the efficiency, complexity, and performance of the language translation process performed by a translator. The goal of this research is to develop an efficient translation system for any language by optimizing memory consumption and by identifying names as nouns efficiently. Although a number of studies can be found on natural language processing in different areas, most were performed with English as the only target language. However, a good number of languages remain nearly unexplored in research. This study focuses on the Bengali language as an example of such unexplored languages. Noticeable studies on the Bengali language so far include Bangla keyboard layout design, English-to-Bangla translators, etc. However, very little research has been done on translating Bengali text to English. Developing an efficient translation system is very complex and expensive, as it requires a huge amount of time and resources. In all languages, there are many words with multiple meanings and multiple forms, and some sentences have multiple grammatical structures expressing the same meaning. Besides, the names of people may not be easily identified due to the vast diversity of names and the tags (prefixes/suffixes) attached to emphasize names. Therefore, it remains a great challenge to recognize a sentence of a particular language with accurate semantic analysis. However, it is very important to have a generalised translation system which can compute the various possible outputs in reasonable time and space. In this paper, we focus on the correct interpretation of names as nouns in a sentence and on the optimization of space requirements using semantic analysis.

Keywords—NLP, OpenNLP, Wordnet

1. Introduction

Language learning requires motivation, time, and dedication. One has to read, write, listen, and speak regularly to learn a new language effectively. Learning a new language is exciting and beneficial at all ages, as it offers practical, intellectual, and aspirational benefits. However, human beings generally acquire one language in early childhood almost subconsciously; this is known as the mother tongue. Human beings can easily sense the meaning or expression of any sentence in their mother tongue by identifying the basic grammatical tags of each word in that sentence. For example, consider the simple sentence "I eat rice". The sentence can automatically be recognized by a child whose mother tongue is English without any formal knowledge of that language. This inherent, subconscious human nature also teaches a child to detect the semantic error in the sentence "I eat Football". Primarily, perhaps, visual perceptions of different objects and activities, e.g., rice, football, books, eating, running, etc., assist a child in sensing the context and meaning of different words in a sentence. The human ability to sense and learn one language appropriately enables one to learn another language by translation. This is where a machine lags behind humans in the field of translation: a machine, which is dictated only by logic or proofs, cannot learn any language inherently or subconsciously.

While progress has been made in language-translation software and allied technologies, the primary language
of the ubiquitous and all-influential World Wide Web is English. English is typically the language of latest-version applications and programs and of new freeware, shareware, peer-to-peer applications, social media networks, and websites. Manuals, installation guides, and product fact sheets of various consumer electronics and entertainment devices are usually made available in English first, before being made available in other languages.

Bengali, the native language of around 189 million people worldwide, mostly from Bangladesh, is considered a low-resource language for machine translation, as it lacks language resources such as electronic texts and parallel corpora. Around 38% of Bengali-speaking people are monolingual. Since the significance of learning English is unavoidable at present, it is important to have a well-developed Bengali-to-English translation learning system.

In this study, we take a Bengali-to-English translation system as an example to propose such a generalised translation skeleton. The main focus of this work is:
• Proper identification of names as nouns by detection of emphasized tags at the end of the names
• Memory optimization by semantic analysis of verbs

978-1-5386-3288-8/17/$31.00 © 2017 IEEE

2. Motivation

Millions of immigrants travel the world from non-English-speaking countries every year. For obvious reasons, learning to communicate in English is very important for immigrants to enter and succeed in mainstream English-speaking countries. Working knowledge of the English language enhances many opportunities in international markets. However, a major group of people lacks proficiency in English. Also, there is still no well-developed translation system for many native languages to English. Therefore, the importance of a generalised translation skeleton is noteworthy.

There are different existing systems for automatic translation. Machine translation is the most popular among them. The European Commission has been using machine translation to convert text from one language to another since 1976. This broad usage spread its necessity widely, along with its developed translation techniques for regular use.

Nowadays, Google Translator is one of the pioneering applications, supporting a number of languages for translating from one to another. Although it works successfully for many languages, it can merely translate Bengali to English. Bing Translator, another popular translator, does not even recognize Bengali. Other translators, e.g., Yahoo Babel Fish, Systran Language Translation, SDL Free Translation, etc., support multilingual translation for languages like Danish, English, Chinese, Italian, Japanese, French, Greek, Korean, etc., but not for Bengali and many other widely used languages.

Natural languages like English, Spanish, and even Hindi are rapidly progressing in computer processing. However, Bengali, despite being among the top ten languages in the world, is still at a quite delinquent stage in computational linguistics and machine translation. Bengali lags behind in crucial research areas like parts-of-speech tagging, information retrieval from texts, text categorization, and, most importantly, syntax and semantic checking [1].

Therefore, our motive in this study is not only to efficiently translate one language to another by proper semantic analysis but also to teach the translated language by explaining the translation mechanism step by step.

3. Related Work

Bengali is one of the most widely spoken languages in the world, with nearly 230 million total speakers. However, Bengali unfortunately still lacks significant research in the area of natural language processing. Bangla-to-English translation was first proposed by Sk. Borhan Uddin, Dr. Md. Fokhray Hossain and Kamanashis Biswas using the OpenNLP tool. They
proposed a simple technique for synthesizing Bengali words. However, they used the OpenNLP tool after translating Bengali words to their corresponding English words, which caused erroneous parts-of-speech (POS) tagging for different words and generated wrong outputs for very simple sentences.

Dasgupta et al. [6] proposed to use syntactic transfer. They converted CNF trees to normal parse trees and, using a bilingual dictionary, generated output translations. However, this research did not consider translating unknown words which did not appear in the bilingual dictionary.

Chunk parsing was first proposed by Abney (1991). Although EBMT (Example Based Machine Translation) using chunks as the translation unit is not new, it has not been widely explored for a low-resource language like Bengali yet. Naskar et al. [13] reported a phrasal EBMT for translating English to Bengali without any evaluation of their EBMT. Besides, they did not clearly explain their translation mechanism, especially the word reordering process.

Saha et al. [14] reported an EBMT for the translation of different news headlines. The work showed that EBMT can be a positive approach for the Bengali language. However, their approach relied mostly on news headlines. Moreover, Gangadharaiah et al. [3] proposed that templates can be useful for EBMT to obtain longer phrasal matches if coordinated with statistical decoders. Their study showed that clustering words manually is time consuming, and that it would be less time consuming to use standard available resources such as WordNet for clustering.

Kim et al. [4] used syntactic chunks as units of translation for improving the insertion or deletion of words between two distant languages. However, an example base with aligned chunks in both the source and the target language is missing in this approach.

4. Proposed Mechanism

Our previous work was on going beyond database-driven and syntax-based translation [1]. That work basically focused on the translation of simple sentences. Simple sentence analysis and recognition is the preliminary step which leads to the advancement towards the translation of improved or more complex sentences. However, analyzing and recognizing a simple sentence of a language correctly, not only syntactically but also semantically, requires enormous exploration and exploitation of that particular language. For example:

• "The complex houses married and single soldiers and their families."
This is called a garden path sentence. Though grammatically correct, the reader's initial interpretation of the sentence may be nonsensical. Here, "complex" may be interpreted as an adjective and "houses" may be interpreted as a noun. Readers are immediately confused upon reading that the complex houses married, interpreting "married" as the verb. How can houses get married? In actuality, "complex" is the noun, "houses" is the verb, and "married" is the adjective. The sentence is trying to express the following: single soldiers, as well as married soldiers and their families, reside in the complex.

• "All the faith he had had had had no effect on the outcome of his life."
This sentence is an example of lexical ambiguity. Although this sentence might sound strange, it is actually grammatically correct. The sentence relies on a double use of the past perfect. The two instances of "had had" play different grammatical roles in the sentence: the first is a modifier while the second is the main verb.

From the above examples, we can see how a simple sentence can become complex to deal with. Besides, there are many
phrasal sentences which indirectly indicate different special semantic meanings.

One of the most challenging tasks is the proper identification of names as nouns, since the names, which cannot be included in the vocabulary, may contain emphasizing tags that need to be separated from the name accurately.

4.1. Methodology

Basically, the translation of a sentence consists of six major steps, as shown in Figure 1:
• Input Bengali text.
• Analyze the input sentence by tokenizing.
• Tagging (parts of speech, number, person) of the tokens.
• Word-by-word translation of the tokens.
• Apply necessary suffixes and words to the verb.
• Rearrange the words applying grammatical rules to output the translated sentence.

Figure 1. Proposed translation skeleton

We mainly focus on tagging the tokens by identifying the names as nouns efficiently. We also optimize the database required for tagging verbs having different forms. Therefore, we improve the subsequent steps of the translation methodology through the efficient tagging mechanism proposed in this paper.

4.2. Name Identification

One major and unique improvement in our system is name identification and name translation. Names need to be identified as nouns correctly first, in order to tokenize the input sentence and analyze the tokens properly. Names of people are generally considered nouns in sentences, and they dictate the person, number, and gender of the subject. Therefore, properly identifying names as nouns is very crucial for accurate translation. Google Translator completely misunderstands the names of people in Bengali. Therefore, it fails to identify the number and person of the subject, which ultimately leads to failure even in the translation of basic sentences. Names cannot be translated by using a database containing the vocabulary.
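The six translation steps listed in the methodology above can be sketched as a minimal pipeline. This is only an illustrative sketch, not the paper's actual implementation: the class name, the tiny in-memory dictionary, and the three-token reordering rule are all our assumptions (a real system would consult the SQLite vocabulary and a full grammatical rule set).

```java
import java.util.*;

// Minimal sketch of the proposed translation skeleton (hypothetical names and data).
public class TranslationPipeline {
    // Steps 3-4: toy bilingual lookup standing in for the SQLite vocabulary database.
    static final Map<String, String> DICT = Map.of(
            "আমি", "I",     // pronoun, first person singular
            "ভাত", "rice",  // noun
            "খাই", "eat"    // verb, first person present
    );

    public static String translate(String bengali) {
        // Step 2: tokenize the input sentence.
        String[] tokens = bengali.trim().split("\\s+");
        // Steps 3-4: tag and translate word by word; an unknown token is kept as-is,
        // the way a name would be handed to the phonetic mapping step.
        List<String> out = new ArrayList<>();
        for (String t : tokens)
            out.add(DICT.getOrDefault(t, t));
        // Step 6: rearrange subject-object-verb (Bengali) into subject-verb-object (English).
        if (out.size() == 3)
            Collections.swap(out, 1, 2);
        return String.join(" ", out);
    }

    public static void main(String[] args) {
        System.out.println(translate("আমি ভাত খাই")); // "I eat rice"
    }
}
```

The reordering rule here is deliberately naive; the point is only to show how tokenizing, word-by-word lookup, and grammatical rearrangement compose into one pipeline.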
However, our system can recognize names by applying its specific grammatical rule set to identify the subject. After we have detected a name as the subject, we can assign the subject the tag "third person, singular number". Then our system can modify the verbs accordingly. However, we do not have any translation for names in the vocabulary. Thus, we have developed a Bengali-to-English phonetic mapping conversion algorithm in the proposed system, which enables translation of names from Bengali to English. We show a sample procedure of name processing for Bengali-to-English translation in Figure 2.

Figure 2. Name translation

4.3. Emphasized Name Identification

There are also names with different emphasizing tags, used in many languages. The emphasizing tags associated with names are actually not part of the original name. They are meant to emphasize the names, like adjectives, and they have a separate meaning and use in the sentence. If these are not identified correctly and separated from the names, the resulting translation may become faulty, as shown in Figure 3. In the figure, the system misinterprets the subject as a name (Amio) due to the omission of suffix checking. The translated sentence should have been "I will also play Football".

Figure 3. Emphasized name

In Bengali names, we can always perform a check on the suffix of the subject and separate the emphasizing tags, as suffixes, from the actual name. After that, we can translate the name as before and take care of the emphasizing tags (suffixes) according to the grammatical rule set, as shown in Figure 4.

Figure 4. Emphasized name identification methodology

Although we can apply this mechanism for name identification appropriately, this may not generate the accurate result
for all the cases. We will come back to this point later with relevant examples. However, considering probable faulty translations for some exceptional cases, we can apply the proposed mechanism for emphasized name identification effectively.

4.4. Memory Optimization

This is another important feature of this work. In Bengali, the same verb may occur in multiple forms depending on the number and person of the subject and the tense of the verb. If we need to store the word translation of each form of the same verb, then the database becomes very large due to the repeated verbs, contributing to massive memory consumption.

However, we can avoid multiple insertions of the same verb having different forms with our proposed database optimization technique. We can store only the main format of a verb and apply semantic analysis to detect the main format from the other formats of the verb, depending on number, person, and tense, as shown in Figure 5. The figure shows how one word (verb) can take different forms depending on tense, and suggests inserting only that particular word (verb) instead of all of its different forms. This avoids multiple insertions in the database for the same verb with multiple forms.

Figure 5. Verbs mapping

More quantitatively, every ASCII character consumes one byte (8 bits) of memory and every Unicode character consumes more than one byte. Every word we insert in the database for the vocabulary contains multiple letters, which are either ASCII or Unicode characters. Inserting ten different forms of the same word can be compared to inserting ten unnecessary words consuming more than ten times the required space, as different forms of one verb can be even bigger words containing arrays of Unicode or ASCII characters. Therefore, memory (space) consumption should improve significantly by carefully avoiding these redundant insertions in the database, as proposed in our memory optimization technique.

5. Experimental Evaluation

This section reflects the evaluation and outcomes of the experimental results according to our proposed methodology.

5.1. Tools and Settings

Experimental setups cost a significant amount of time and energy. This was the most critical period of our research. For experimentation, we used the following features in our implemented system:
• Language: Java
• Platform/IDE: NetBeans
• Database: SQLite
• Tool: OpenNLP tools

A major issue arose while taking input and parsing Bengali texts in Java. We set the text encoding to UTF-8; however, Bengali texts failed to appear in NetBeans. After a lot of research, we had to change the font settings and some other settings to work with Bengali texts successfully in NetBeans.

We used SQLite with Java for the database in our system. Since we had to use a Bengali-to-English dictionary, we needed a database to retrieve the word translations. To do this, we installed SQLite and also added a jar file for SQLite to our project.

Regarding the Bengali-to-English dictionary, we could not find any well-defined dictionary format or API that we could integrate into our database directly in a time-efficient way. Since inserting the words by brute force is hugely time consuming, we had to do a huge amount of exhaustive work to insert a reasonable number of words into our database.

5.2. Results

We dealt with various types of sentences. However, our system works perfectly with basic simple sentences and complex
sentences. In particular, in this work we tested our system with different emphasized names and with verbs in different tenses. A scenario of our experimental results is shown as follows:

Figure 6. Example of emphasized name identification
Figure 7. Emphasized name identification results

Figures 6 and 7 reflect the results of our emphasized name identification methodology. The figures show how the emphasizing tags, 'also' and 'only', have been identified and separated from the subjects, 'you' and 'Rahim', which ultimately leads to the construction of the correct form of verbs by accurately recognizing the number and person of the subjects.

Figure 8. Database optimization technique (Present Continuous)
Figure 9. Database optimization technique (Present Perfect)

In Figures 8 and 9, we show the database optimization technique using the mapping of different forms of verbs to one main verb. The figures illustrate the mapping of different forms of the verb 'eat' in the present continuous and present perfect tenses. Also, different forms of 'eat' occurring in Bengali, due to subjects having different persons and numbers, have been mapped to 'eat' here by appropriate semantic analysis.

5.3. Findings

In this section, we show a comparison of our proposed system with Google Translator. Google Translator completely fails to identify both the names (Figure 10) and the emphasized tags in the names (Figure 11).

Figure 10. Comparison of Google Translator with our proposed system for name identification
Figure 11. Faulty translation with Google Translator

Also, Google Translator does not follow any optimization technique for identifying or predicting verbs from their different forms.

We have already shown the improvement achieved by our proposed system in Figures 6 and 8. Figure 6 reflects the identification of emphasizing tags in subjects, and Figure 8 shows the database optimization technique for the present continuous tense.
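The database optimization technique shown in Figures 8 and 9 is essentially a many-to-one mapping from inflected verb forms to a single stored base verb. A minimal sketch follows; the class name and the list of Bengali surface forms of "to eat" are our illustrative assumptions, not the system's actual database rows.

```java
import java.util.*;

// Sketch of the verbs-mapping idea: store one base verb ("eat") and map every
// inflected surface form to it, instead of inserting one dictionary row per form.
public class VerbMapper {
    static final Map<String, String> FORM_TO_BASE = new HashMap<>();
    static {
        // Different persons/tenses of the Bengali verb "to eat" all map to one base entry.
        for (String form : List.of("খাই", "খাও", "খায়", "খাচ্ছি", "খাচ্ছে", "খেয়েছি"))
            FORM_TO_BASE.put(form, "eat");
    }

    // Look up the single stored translation; person and tense are re-applied later
    // by the suffix-handling step of the pipeline. Unknown tokens pass through.
    public static String baseTranslation(String surfaceForm) {
        return FORM_TO_BASE.getOrDefault(surfaceForm, surfaceForm);
    }

    public static void main(String[] args) {
        System.out.println(baseTranslation("খাচ্ছি")); // "eat"
        // Six surface forms, but only one distinct stored translation:
        System.out.println(new HashSet<>(FORM_TO_BASE.values()).size()); // 1
    }
}
```

The space saving claimed in Section 4.4 falls out directly: n inflected forms cost one stored translation entry plus n small keys, rather than n full rows.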
However, one important observation regarding emphasized name identification is that the emphasized tag itself may also be part of a valid name. Our system would separate that tag from the name, which can lead to faulty name identification in such cases. However, the improvement achieved for most cases is much more significant than these very rare cases. Therefore, we need to tolerate the ambiguous name identification resulting in a faulty or unexpected translation here as a trade-off, as shown in Figure 12, which shows the desired correct translation, and Figure 13, which shows the faulty translation by the proposed system in that particular case.

Figure 12. Desired accurate translation
Figure 13. Ambiguous translation by our proposed system

6. Future Works

• One of the main challenges in Bengali-to-English text conversion remains implementing its vast grammatical rules. If we can track the core rules to acquire a generalized format for all rules and exceptions, then the translation task will be simpler and more compact.
• There is a great deal of research opportunity in language processing. Grammars keep changing as a language evolves. Therefore, we need a translation process that can take up new sentence rules at any time. Machine learning using statistical machine translation can be one way to achieve this.
• A statistical language model can improve the translation quality. We plan to experiment with this model in the future.
• Efficient AI techniques and indexing and searching mechanisms should improve the total system, which might result in better output.
• Another idea is to extend the preposition handling component by adding more postpositional words and inflectional suffixes.
•
As WordNet does not have all sufficient information yet, preparing a Bengali WordNet can be a progressive approach.
• Developing OpenNLP tools for parts-of-speech tagging of Bengali words efficiently is one of the most crucial tasks in Bengali-to-English translation. At present, OpenNLP tools can recognize and process English sentences successfully.
• We also plan to make a machine translation system that users can train using AI techniques.
• Initially, our aim was to build a translation model for Bengali-to-Arabic conversion. However, due to a lack of proficiency in the Arabic language, we had to start with conversion to English. Therefore, we want to implement our proposed generalised skeleton of language processing for the Arabic language soon, so that we can help a large group of people learn and understand Arabic.

7. Conclusion

Natural language processing tasks are always complex and challenging due to a number of critical issues. Even the most sophisticated software cannot substitute the skill of a professional translator. There are many reasons why machine translations are not as satisfactory as human translations. Ambiguity in translation mainly occurs due to one word having different meanings depending on the context. Also, there may be human emotions and expressions associated with a sentence, causing ambiguity in its expected meaning. Disambiguation requires either a shallow approach that uses statistical techniques to remove ambiguity, or a more intelligent approach that involves comprehensive knowledge of a word. The former still leaves plenty of scope for translation errors, while the latter is almost impractical to implement.

One of the reasons a translator cannot replace professional human translation is the same reason that plain old bilingual laypeople, for many tasks, cannot replace professional human translation. Most translation tasks require more than just knowledge of two languages. The idea that one can simply create one-to-one equivalencies across languages is wrong. Translators are not walking dictionaries. They recreate language. They craft beautiful phrases and sentences to make them have the same impact as the source. Often, they devise brand-new ways of saying things, and to do so, they draw upon a lifetime's worth of knowledge derived from living in two cultures. Machines or machine translators cannot exactly do that.

In almost every language, the normal rules of grammar come with a number of exceptions, and keeping track of all those situations is a difficult task even for intelligent beings. Building a near-accurate translator massively demands wide and complex application of artificial intelligence, which, on the contrary, may drastically degrade the overall performance of the system. Hence, the efficiency of translating languages with complex grammatical rules is not too high. In our proposed system, we found that for simple sentences the system can easily respond with the correct answer (or maybe just with an answer) immediately.

Our system currently focuses only on Bengali-to-English translation. However, it has a limited knowledge base and vocabulary. By increasing the vocabulary and the knowledge base, we can improve its efficiency by testing over a wide range of different cases for general-purpose use.

This is a very creative research topic. People generally strive not only for translations between two languages but also for learning multiple languages equally effectively. There are many existing language learning websites at present. As they already have
implemented some important concepts and features, exploiting and exploring them more, and perhaps even integrating them, would be a useful task. Besides, many other creative features still remain to be thought of for making the learning easier, faster and, importantly, interesting. Considering the limitations of a translator, our preference is always towards making the learning of a language easier by implementing all the basic translation procedures, which is what our proposed translation system is all about.

References

[1] M. Islam and A. Islam, "Polygot: Going Beyond Database Driven And Syntax-based Translation," ACM DEV '16: Proceedings of the 7th Annual Symposium on Computing for Development, November 2016.
[2] Z. Anwar, "Developing a Bangla to English Machine Translation System Using Parts Of Speech Tagging: A Review," Journal of Modern Science and Technology, Vol. 1, No. 1, May 2013.
[3] R. Gangadharaiah, R. D. Brown, and J. G. Carbonell, "Phrasal equivalence classes for generalized corpus-based machine translation," in Alexander Gelbukh, editor, Computational Linguistics and Intelligent Text Processing, volume 6609 of Lecture Notes in Computer Science, pages 1328, Springer Berlin / Heidelberg, 2011.
[4] S. Raphael, J. D. Kim, R. D. Brown, and J. G. Carbonell, "Chunk-Based EBMT," EAMT, 2010.
[5] M. Roy, "A Semi-supervised Approach to Bengali-English Phrase-Based Statistical Machine Translation," Proceedings of the 22nd Canadian Conference on Artificial Intelligence, 2009.
[6] S. Dasgupta, A. Wasif, and S. Azam, "An Optimal Way Towards Machine Translation from English to Bengali," Proceedings of the 7th International Conference on Computer and Information Technology (ICCIT), 2004.
[7] M. Anwar and M. Bhuiyan, "Syntax Analysis and Machine Translation of Bangla Sentences," International Journal of Computer Science and Network Security, 09(08), 317-326, 2009.
[8] "Optimal Way Towards Machine Translation from English to Bengali," in the Proceedings of the 7th International Conference on Computer and Information Technology (ICCIT), Bangladesh, 2004.
[9] "Improving Example Based English to Bengali Machine Translation using WordNet," 2009.
[10] "Bangla to English Text Conversion using opennlp Tools," Daffodil International University Journal of Science & Technology, Vol. 8, Issue 1, January 2013.
[11] G. Doddington, "Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics," Proceedings of the Second International Conference on Human Language Technology Research, 2002.
[12] M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhou, "A Study of Translation Edit Rate with Targeted Human Annotation," Proceedings of the Association for Machine Translation in the Americas, 2006.
[13] S. K. Naskar and S. Bandyopadhyay, "A Phrasal EBMT System for Translating English to Bengali," Proceedings of the Workshop on Language, Artificial Intelligence, and Computer Science for Natural Language Processing Applications (LAICSNLP), 2006.
[14] D. Saha, S. K. Naskar, and S. Bandyopadhyay, "A Semantics-based English-Bengali EBMT System for translating News Headlines," MT Summit, 2005.
[15] N. Karamat, "Verb Transfer For English To Urdu Machine Translation," FAST-Lahore, 2006.
[16] N. Chatterjee, S. Goyal, and A. Naithani, "Resolving Pattern Ambiguity for English to Hindi Machine Translation Using WordNet," Department of Mathematics, Indian Institute of Technology Delhi, published in the Workshop on Modern Approaches in Translation Technologies, Borovets, Bulgaria, 2005.
[17] "Example Based English to Bengali Machine Translation," thesis work of Khan Md. Anwarus Salam, completed in August 2009.
[18] J. Tiedemann and L. Nygard, "The OPUS corpus - parallel and free," Proceedings of LREC, 2004.
[19] OpenNLP, www.maxnet.sourceforge.net, accessed on July 13, 2017, & Google Translator.
[20] D. Melamed, "A Geometric Approach to Mapping Bitext Correspondence," Proceedings of the First Conference on Empirical Methods in Natural Language Processing (EMNLP), 1996.
<s>Measuring Semantic Similarity for Bengali Tweets Using WordNet

Proceedings of Recent Advances in Natural Language Processing, pages 537–544, Hissar, Bulgaria, Sep 7–9 2015.

Dwijen Rudrapal, CSE Department, NIT Agartala, India, dwijen.rudrapal@gmail.com
Amitava Das, CSE Department, IIIT Sricity, India, amitava.santu@gmail.com
Baby Bhattacharya, CSE Department, NIT Agartala, India, babybhatt75@gmail.com

Abstract

Similarity between natural language texts or sentences in terms of meaning, known as textual entailment, is a generic problem in the area of computational linguistics. In the last few years researchers have worked on various aspects of the textual entailment problem, but mostly restricted to the English language. In this paper we present a method for measuring the semantic similarity of Bengali tweets using WordNet. Moreover, we define partial textual entailment (PTE), since in real data partial entailment cases are as prevalent as complete/direct entailment. Although by definition entailment is a directional relationship, here we treat entailment more as semantic similarity.

Keywords: Semantic similarity; WordNet; Synonym

1 Introduction

Variations of natural language expression make it difficult to determine semantically equivalent sentences. The beauty of natural languages is that a similar meaning can be expressed in countless ways; it is therefore a very complex task to measure the relatedness of natural language sentences. Morpho-syntactic variations of similar-meaning expressions are especially prevalent in social media text due to its informal nature. Semantic similarity scores play an important role in many Natural Language Processing (NLP) applications such as multi-document summarization (MDS), question answering (QA) and information extraction (IE) (Bhagwani et al., 2012).</s>
Several researchers have explored semantic similarity methods, mostly for English, far fewer for Indian languages and almost none for Bengali. Technically these methods can be categorized into two groups: dictionary/thesaurus-based methods (one such example is edge counting) and corpus-based methods (one such example is information theory) (Li et al., 2003). Edge-counting-based methods use only semantic links, while corpus-based methods combine corpus statistics with taxonomic distances.

The objective of this work is to design a system that measures a semantic similarity score between two Bengali tweets. We adopted a lexical method: words are grouped into clusters in terms of their senses along with their synonyms. Our proposed method centers on analyzing shared-word similarity among tweets.

Partial Textual Entailment (PTE) is defined as a bidirectional relationship over a sentence/tweet pair: the partial or complete inference of the meaning of one text from another. We define the following 4 detailed PTE categories:

1. Type 1: If both given texts carry the same information and mean the same, it is a case of direct entailment, noted as (X = X).
2. Type 2: If the first/second given text has extra information relative to the second/first text, it is categorized as PTE 2. This type has two variations: (X = X+Z) or (X+Z = X).
3. Type 3: If the first given text has all the information of the second given text and each has some extra information, it is the third variation of PTE, noted as (X+Z = X+Y).
4. Type 4: If both the given</s>
|
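The four PTE categories above can be sketched as a small decision rule over abstract information blocks. This is an illustrative reading, not the authors' code: the function name and the set-based framing of X, Y, Z are our own.

```python
# Hypothetical sketch of the four PTE categories, treating each text as a
# set of abstract information blocks (X, Y, Z).

def pte_type(a, b):
    """Classify a pair of information-block sets into PTE types 1-4."""
    a, b = set(a), set(b)
    if not (a & b):
        return 4          # Type 4: no common information (NOT-Entailed)
    if a == b:
        return 1          # Type 1: direct entailment (X = X)
    if a <= b or b <= a:
        return 2          # Type 2: one side has extra info (X = X+Z)
    return 3              # Type 3: both sides add extra info (X+Z = X+Y)

print(pte_type({"X"}, {"X"}))           # direct entailment
print(pte_type({"X"}, {"X", "Z"}))      # one-sided extra information
print(pte_type({"X", "Y"}, {"X", "Z"})) # shared core, extras on both sides
print(pte_type({"Y"}, {"Z"}))           # nothing in common
```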
<s>texts have no common information, it is a NOT-Entailed case.

In all the above cases X, Y and Z represent a block of information in a given text.

The remainder of this paper is organized as follows. Section 2 describes the corpus acquisition and annotation process, and Section 3 introduces the WordNet structure and the pre-processing step. Section 4 details the experiment and evaluation setup. In Section 5 we report the performance of the baseline system. Section 6 discusses errors in the results. Section 7 reviews related work and Section 8 concludes the paper.

2 Corpus Acquisition and Annotation

2.1 Corpus

To create a Bengali tweet corpus for the proposed entailment problem we targeted tweets on specific contemporary popular topics. The rationale behind topic-based tweet collection is to capture people's natural way of explaining an event using different synonymous words and varied syntactic formations while expressing the same meaning. A paid Twitter API (http://www.tweetarchivist.com) has been used for this purpose. In total, 6500 Bengali tweets were collected over a period of 2 months (August 2014 - September 2014) on 25 different topics covering various domains such as international and national politics, sports, natural disasters, political campaigns and elections: for example, the Jamayet strike issue in Bangladesh, the chit fund scam in Orissa and Bengal, the flood in Kashmir, the Ukraine crisis, the Knight Riders' performance in the IPL, and the by-election in West Bengal. On a few topics the number of tweets was surprisingly high, more than 2000; on some topics the number of tweets was around 100 or less.

2.2 Annotation and Corpus Statistics

For the manual annotation of semantic similarity among tweets, we involved two human annotators who are native Bengali speakers but not linguists. An automatic cosine similarity method was applied within each same-topic cluster to prune tweet pairs for annotation from the corpus, and an experimentally chosen threshold was then set to create the annotation pairs. Finally, tweet pairs were manually marked according to the PTE types. Annotation agreement was measured on a small subset of 100 sentence pairs, randomly chosen from one topic; we found an annotation consensus of 0.86 kappa (Cohen, 1960). One empirical question could be raised here: cosine-similarity-based pruning is a biased method, since there are countless ways to express the same meaning with different sets of words (synonyms). To make sure, we thoroughly analyzed the part of the corpus left out after cosine pruning and found only a handful of cases (3-4%) where people used entirely different wordings.

The annotation process produced a set of 804 tweet pairs; among them, 350 tweet pairs were found to be entailed and 454 tweet pairs were annotated as negative cases. The distribution of the different PTE classes in the annotated data is shown in Table 1.

Table 1: Distribution of tweet pairs over PTE classes (804 pairs)
  Type 01: 350 (43.5%)
  Type 02:  94 (11.69%)
  Type 03:  74 (9.20%)
  Type 04: 286 (35.57%)

It can be noticed that there is a significant presence of PTE 2 and 3 classes in</s>
|
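The cosine-pruning step of Section 2.2 can be sketched as follows. This is an illustrative sketch, not the authors' code; the 0.3 threshold is a hypothetical stand-in for their experimentally chosen value.

```python
# Sketch of cosine-similarity pruning: same-topic tweet pairs whose
# bag-of-words cosine score clears a chosen threshold are kept for
# manual PTE annotation.
import math
from collections import Counter

def cosine(tokens_a, tokens_b):
    """Bag-of-words cosine similarity between two token lists."""
    va, vb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def prune_pairs(pairs, threshold=0.3):   # threshold is hypothetical
    """Keep tweet pairs whose cosine similarity reaches the threshold."""
    return [(a, b) for a, b in pairs
            if cosine(a.split(), b.split()) >= threshold]

pairs = [("jamayet called strike", "jamayet strike is peaceful"),
         ("flood in kashmir", "knight riders won the match")]
print(prune_pairs(pairs))   # only the first, topically overlapping pair survives
```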
<s>the real corpus, whereas the majority class is still the direct entailment case. An argument could be raised as to why the negative examples, i.e. the PTE-04 type, are so essential to include. The rationale is that these negative examples matter because they form the exclusion set made by the annotators despite high cosine similarity values with their peers. The average cosine similarity score of the negative examples is 0.25, and for PTE-03 it is 0.35. Ranges and average cosine similarity scores on the golden set are reported in Table 2.

For example:
বৃহস্পতি ও ররোববোর হরিোল রেকেকে জোমোয়োি (ENG: Jamayet called a strike on Thursday and Sunday.)
তিরোজগকে জোমোয়োকির তনরুত্তো হরিোল (ENG: The strike Jamayet called is peaceful in Shirajganj.)
Cosine similarity: 0.516

Table 2: Ranges of cosine similarity scores
  Entailed:      > 0.70       (avg. 0.70)
  Not-Entailed:  < 0.70       (avg. 0.35)
  PTE type 1:    > 0.70       (avg. 0.70)
  PTE type 2:    0.40 - 0.69  (avg. 0.46)
  PTE type 3:    0.30 - 0.39  (avg. 0.35)
  PTE type 4:    < 0.30       (avg. 0.25)

3 Bengali WordNet

WordNet is a lexical semantic network holding semantic relations: synonyms and word-senses are the nodes of the network, and the relations among synonyms and word-senses are its edges. In WordNet, the meaning of each word is represented by a unique word-sense and a set of its synonyms called a synset. We used the Bengali WordNet developed by Das and Bandyopadhyay, described in (Das and Bandyopadhyay, 2010), consisting of a total of 12K synsets.

3.1 Pre-Processing

Text pre-processing is a vital prerequisite when working with noisy social media text. Pre-processing involves splitting each tweet into valid tokens (words and symbols), stemming, removing stop words and part-of-speech tagging. The CMU tweet tokenizer (Gimpel et al., 2011) has been used here; although primarily developed for English, it also works well for other languages like Bengali. We used the Bengali stop word list made publicly available by ISI Kolkata (http://www.isical.ac.in/~clia/resources.html). For the POS tagging, the system developed by (Dandapat et al., 2007) has been used, although this POS tagger is not trained on social media text and its accuracy on tweets has not been measured; this is something we would like to do next.

To trim all surface word forms to their corresponding roots we developed a simple rule-based Bengali stemmer. Our stemmer concentrates on framing rules for stemming word categories such as nouns, verbs, adverbs and adjectives. To frame the rules for stripping suffixes and prefixes we drew inspiration and knowledge from (Dash, 2014) and (Das and Bandyopadhyay, 2010).

3.2 Similarity Computation

We devised similarity measurement methods at the word level, then accumulated those word-level similarities to the sentence level.

3.2.1 Computation of Word Similarity

Different psychological experiments demonstrate that semantic similarity is clearly context-dependent (Medin et al., 1993; Tversky, 1977). The meaning of a word in a sentence is context-dependent, which affects semantic similarity. For example:
খোওয়োর আকগ হোি ভোকলো েকর ধুকয় রনকব (ENG: Before the meal, wash</s>
|
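The rule-based suffix stripping described in Section 3.1 can be sketched as below. This is not the authors' stemmer: the suffix list is a hypothetical, transliterated stand-in, while the real system frames per-category rules for Bengali nouns, verbs, adverbs and adjectives.

```python
# Illustrative sketch of rule-based suffix stripping for stemming.
# The suffix list is hypothetical; real rules are category-specific.
SUFFIXES = ["gulo", "der", "ta", "ra", "e"]

def stem(word, suffixes=SUFFIXES):
    """Strip the longest matching suffix, keeping a root of 2+ characters."""
    for suf in sorted(suffixes, key=len, reverse=True):
        if word.endswith(suf) and len(word) - len(suf) >= 2:
            return word[: -len(suf)]
    return word

print(stem("boigulo"))   # hypothetical plural "books" reduced to its root
print(stem("am"))        # too short to strip; returned unchanged
```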
<s>hands properly.)
তরয়োনজু এর হিযোেোকে ওর ও হোি তেল (ENG: He also had a hand in the Riyanuj murder case.)

The two sentences above share the common word "হোি" (hand), but the word's meaning differs between them. In the first sentence "হোি" refers to a part of the human body, and in the second sentence "হোি" implies association or involvement in an event.

For the semantic similarity calculation between two given words w1 and w2, we compute a scalar distance between the words in the meaning space based on their synsets extracted from the WordNet. If w1 and w2 belong to the same synset, i.e. w1 is a synonym of w2 or vice versa, then the distance d between w1 and w2 is 0 and the semantic similarity score is 1; otherwise the distance d is 1 and the semantic similarity score is 0:

  Sim(w1, w2) = 1 if d = 0, and 0 if d = 1.   (1)

For example, for w1: অতভজ্ঞ (experienced) and w2: োরদর্শী (expert), the calculated semantic similarity score is 1.

3.2.2 Sentence Similarity Computation

For sentence-level similarity we performed two sets of experiments: one with the fine-grained PTE classes, i.e. the 4 classes, and the other as a binary classification task, entailed or not entailed.

To determine the semantic similarity score of two given tweets A and B, we first pre-process the tweets as described in Section 3.1 and calculate their lengths: say x is the length of tweet A and y is the length of tweet B. A semantic similarity matrix R[x, y] is then built over each pair of words wi and wj, where i and j are the word indices. If a word at any position in A is not available in the WordNet, we compute the word similarity based on the presence of the same word in B: if such a word from A gets a complete match with any word in B, the similarity score between the words is 1, else 0. For example, names and abbreviations like িো (Samajbadi Party) and তবকজত (BJP), which are abbreviations of political party names, are not available in WordNet; their similarity is measured by character matching of each word in the tweets.

Every token of tweet A represents a row and every token of tweet B represents a column in the semantic similarity matrix R[x, y]. Figure 1 illustrates an example similarity matrix for the two example tweets cited below; each cell holds the word-level similarity score.

For example:
িোঈদীর আমিুৃয েোরোদণ্ড প্রদোন েরোয় হরিোল রেকেকে জোমোয়োি (ENG: Jamayet called a strike over the lifetime imprisonment of Sighdi.)
জোমোয়োকির বনধ চলকে, িোঈদীর আজীবন েোরোদণ্ড রদওয়োর প্রতিবোকদ (ENG: Jamayet's strike is going on, in protest of Sighdi's lifetime imprisonment.)
Computed semantic similarity score: 0.923

Figure 1: Semantic similarity matrix between tweets.

The matching weight of tweet A is computed by summing all the row-wise cell weights, and the matching weight of tweet B by summing all the column-wise cell weights. In the example cited above, the matching weight of both</s>
|
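The word-level rule of Eq. (1) and the sentence-level matrix accumulation above can be sketched as follows. This is our own runnable reading, not the authors' code: the toy synset table stands in for the 12K-synset Bengali WordNet, the tokens are English transliterations of the worked example, and with 0/1 cells the row-wise and column-wise sums both equal the matrix total. The result is consistent with the example above (matching weight 6 per tweet, lengths 6 and 7, score 12/13 ≈ 0.923).

```python
# Sketch of Eq. (1) word similarity plus the sentence-level matrix score.
SYNSETS = [{"experienced", "expert"}, {"strike", "bandh"}]  # toy stand-in

def word_sim(w1, w2):
    """1 if the words are identical (incl. out-of-WordNet names matched
    verbatim) or share a synset (d = 0); else 0 (d = 1)."""
    if w1 == w2:
        return 1
    return 1 if any(w1 in s and w2 in s for s in SYNSETS) else 0

def sentence_sim(tokens_a, tokens_b):
    """(matching weight of A + matching weight of B) / (x + y)."""
    x, y = len(tokens_a), len(tokens_b)
    R = [[word_sim(a, b) for b in tokens_b] for a in tokens_a]
    match_a = sum(sum(row) for row in R)                        # row-wise
    match_b = sum(R[i][j] for j in range(y) for i in range(x))  # column-wise
    return (match_a + match_b) / (x + y)

# Transliterated version of the two example tweets (6 and 7 tokens).
a = "sighdi lifetime imprisonment announced strike jamayet".split()
b = "jamayet bandh going sighdi lifetime imprisonment announced".split()
print(round(sentence_sim(a, b), 3))  # → 0.923
```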
<s>tweet A and tweet B is 6. The following formula determines the semantic similarity score between tweet A and tweet B, where MA and MB are the matching weights of A and B:

  Sim(A, B) = (MA + MB) / (x + y)   (2)

In the example above this gives (6 + 6) / (6 + 7) ≈ 0.923. An important point is that the proposed similarity value is based on each of the individual word similarity values, so the overall similarity always reflects the influence of each word and its senses. According to the proposed semantic similarity score formulation, similarity values range from 0 to 1: if all the words of tweet A are semantically similar to all the words of tweet B the score is 1, and it is 0 if there is no match.

4 Performance

System performance has been evaluated in two ways: with the binary classes (entailed or not) and with the fine-grained PTE classes. For the evaluation we measured the similarity score of all the tweet pairs in a class and then experimentally set a threshold to achieve optimum accuracy for each class. The decided threshold values are reported in Table 3.

Table 3: Threshold values of semantic similarity for Bengali tweets
  Entailed:      > 0.75
  Not-entailed:  < 0.75
  Type 1:        > 0.75
  Type 2:        0.2 - 0.29
  Type 3:        0.3 - 0.74
  Type 4:        < 0.2

Accuracy results of our proposed system on the binary classes and the fine-grained classes, using the pre-set threshold values, are reported in Tables 4 and 5.

Table 4: Performance on binary entailment classes
  Type           Precision   Recall   F1
  Entailed       98.23       63.42    77.08
  Not-Entailed   77.85       99.11    87.20
  Avg.           88.04       81.27    82.14

Table 5: Performance on the PTE classes
  Class      Precision   Recall   F1
  PTE 01     98.23       63.42    77.08
  PTE 02     26.15       36.17    30.35
  PTE 03     16.54       60.81    26.01
  PTE 04     86.36       53.14    65.80
  Avg.       56.82       53.39    49.81

We set up another experiment on English tweets to evaluate the proposed approach and for the purpose of comparison. From SemEval 2015 Task 1 we collected a POS-tagged corpus of tweet pairs. We involved two human annotators and tagged 639 tweet pairs according to the PTE classes. To measure inter-annotator agreement, 100 tagged pairs were randomly chosen; we found an inter-annotator agreement of 0.709. The distribution of the tweet pairs over the PTE classes is shown in Table 6.

Table 6: English tweet pairs in PTE classes (639 pairs)
  Type 01:  48 (7.5%)
  Type 02:  61 (9.5%)
  Type 03:  83 (12.9%)
  Type 04: 447 (69.95%)

We then applied our proposed algorithm to determine semantic similarity using the English WordNet (Boyd-Graber et al., 2006). All the POS-tagged tweets were pre-processed by removing stop words and lemmatization (Manning et al., 2014). System performance on these English tweet pairs was measured in two ways, binary classes and fine-grained PTE classes, and for each we achieved optimum accuracy with the pre-defined threshold values:

  Entailed:      > 0.65
  Not-entailed:  < 0.65
  Type 1:        > 0.65
  Type 2:        0.5 - 0.64
  Type 3:        0.4 - 0.49
  Type 4:        < 0.4
Table 7: Threshold ranges</s>
|
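The final decision step for Bengali tweets, mapping a similarity score onto a PTE class with the experimentally set thresholds of Table 3, can be sketched as a simple cascade. This is our illustrative reading, not the authors' code; scores falling exactly on a range boundary are resolved here by the cascade order, which the paper does not specify.

```python
# Sketch of the threshold-based PTE classification for Bengali tweets
# (Table 3): Type 1 > 0.75, Type 3 in 0.3-0.74, Type 2 in 0.2-0.29,
# Type 4 below 0.2.

def pte_class_bengali(score):
    """Map a sentence similarity score in [0, 1] to a PTE class (1-4)."""
    if score > 0.75:
        return 1
    if score >= 0.3:
        return 3
    if score >= 0.2:
        return 2
    return 4

print(pte_class_bengali(0.923))  # the worked example scores as Type 1
print(pte_class_bengali(0.25))   # falls in the Type 2 range
```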
<s>for English tweets.

(Resources: http://alt.qcri.org/semeval2015/task1/; http://wordnetcode.princeton.edu/standoff-files/core-wordnet.txt; http://www.lextek.com/manuals/onix/stopwords1.html)

[Figure 1, referenced above, appeared here: the word-by-word 0/1 similarity matrix between the two example tweets, with tweet A's tokens (িোঈদী/Sighdi, আমিুৃয/lifetime, েোরোদণ্ড/imprisonment, প্রদোন/announced, হরিোল/strike, জোমোয়োি/Jamayet) as columns and tweet B's tokens as rows; each of the six semantically matching token pairs scores 1, and the extra token প্রতিবোদ (protest) has no match.]

Performance of the proposed system on the SemEval English tweets is reported in Tables 8 and 9.

Table 8: Performance on the binary entailment classes for English tweets
  Type           Precision   Recall   F1
  Entailed       22.75       79.16    35.34
  Not-Entailed   97.88       78.17    86.92
  Avg.           60.32       78.67    61.13

Table 9: Performance on the PTE classes for English tweets
  Class      Precision   Recall   F1
  PTE 01     31.40       79.16    44.97
  PTE 02     14.28       16.39    15.26
  PTE 03     13.63       14.45    14.03
  PTE 04     94.58       66.44    78.05
  Avg.       38.47       44.11    38.07

Results on English tweets are directly comparable with (Xu et al., 2014), named MULTIP, which makes use of features such as string comparison, POS and topic words. Its reported final accuracy was 71.5 (F-measure), whereas feature ablation shows that the string + POS features alone achieved 49.6 (F-measure), which is directly comparable with our system's result of 61.13 on the binary classes, while our system uses only WordNet-based lexical features. Performance degradation on fine-grained classes is a quite natural NLP phenomenon. Integration of POS and topic-word features into our system would be straightforward, but extracting those features for Bengali tweets demands research endeavors, as those NLP tools are presently unavailable for the language.</s>
5 Baseline System and Performance

  SN  PTE type      Threshold range  Accuracy
  1   Entailed      > 0.75           72.89
  2   Not-entailed  < 0.75           84.7
  3   Type 1        > 0.75           73.48
  4   Type 2        0.2 - 0.29        4.60
  5   Type 3        0.3 - 0.74       11.9
  6   Type 4        < 0.2            75.87

Table 10: Baseline system performance on the PTE classes for Bengali tweets.

We have developed a very basic system to categorize Bengali tweets according to the defined PTE classes. Two tweets are compared using only word matching, without WordNet information. This simple method returns a similarity score for a pair of tweets. We calculated the similarity score for all the PTE-class tweet pairs and experimentally set a threshold for each class to achieve the highest accuracy. The threshold values for each class and the accuracy of the system are reported in Table 10. The proposed system outperforms this baseline, which also clarifies the fact that PTE recognition is more challenging than classical unidirectional textual entailment recognition.

6 Discussion

A system's poor performance on the fine-grained classes is a natural phenomenon for any NLP system. This is an ongoing work. In this section we discuss the challenges related to the PTE classes. Let us first explain why PTE class identification is required. Common information boundary detection is essential for various applications, for example multi-document summarization (MDS). An MDS system needs to remove common information chunks before</s>
|
<s>the aggregation. Indeed, automatic PTE detection for social media text is a challenging problem. Moreover, additional NLP resources for a resource-scarce language like Bengali are not well developed. Looking at the error types, we decided to go for a system that can take both feature inputs, lexical and syntactic, but dependency parser development for Bengali tweets is a separate problem altogether. Confusion matrices are drawn for Bengali tweets (Figure 2) and English tweets (Figure 3) to understand the overlap between PTE classes, and it has been observed that PTE02 and PTE03 closely overlap with each other on both data sets.

              System tagged
  Gold     PTE01  PTE02  PTE03  PTE04  Total
  PTE 01     222      3    125      0    350
  PTE 02       2     34     43     15     94
  PTE 03       2     18     45      9     74
  PTE 04       0     75     59    152    286
  Total      226    130    272    176    804

Figure 2: Confusion matrix for Bengali tweets.

              System tagged
  Gold     PTE01  PTE02  PTE03  PTE04  Total
  PTE 01      38      8      1      1     48
  PTE 02      41     10      7      3     61
  PTE 03      42     16     12     13     83
  PTE 04      46     36     68    297    447
  Total      167     70     88    314    639

Figure 3: Confusion matrix for English tweets.

7 Related Works

Automatic detection of textual entailment is a well-studied discipline, but most of the endeavors so far have concentrated on English, with almost no work on Indian languages, especially Bengali. There are many approaches to measuring the semantic similarity of words and sentences, based on schemes ranging from simple organizational schemes like dictionaries to complex ones like WordNet (Fellbaum, 2010) and ConceptNet (Liu et al., 2004). The model proposed by (Tversky, 1977) is one of the early works in this area. Technically these methods can be categorized into two groups: edge counting-based (or dictionary/thesaurus-based) methods and information theory-based (or corpus-based) methods (Li et al., 2003). Of the two approaches, comparatively little research has been done on the edge counting-based method. Rada et al. 
(1989) proposed a metric called Distance, which determines the average minimum path length over all pairwise combinations of nodes between two subsets of nodes. The Distance measure has been used to assess the conceptual distance between sets of concepts when used on a semantic net of hierarchical relations, and it represents the relatedness of two words. Due to the specific applications of the edge counting-based method, such as medical semantic nets (Li et al., 2003), most of the research on semantic similarity has followed the information theory-based method. (Resnik, 1993a) is the first work on an information theory-based system; it modeled the selectional behavior of a predicate as its distributional effect on the conceptual classes of its arguments. The experimental results of this model suggest that many lexical relationships are better viewed in terms of underlying conceptual relationships. A later work (Resnik, 1993b) focuses on selectional preferences and semantic similarity as information-theoretic relationships involving conceptual classes, and demonstrates the applicability of these relations to measuring the semantic similarity between two words. A model proposed by (Lee et al., 1993) also measured the distance</s>
|
<s>of the nodes using edge weights between adjacent nodes in a graph as an estimator of semantic similarity. The work by (Richardson et al., 1994) proposed a WordNet-based scheme for Hierarchical Conceptual Graphs (HCG) to measure semantic similarity between words. The system proposed by (Li et al., 2006) uses a semantic-vector approach to measure sentence similarity: sentences are transformed into feature vectors having the individual words from the sentence pair as the feature set. (Liu et al., 2008) proposed an approach to determining sentence similarity which takes into account both semantic information and word order. They define the semantic similarity of sentence 1 relative to sentence 2 as the ratio of the sum of the word similarities, weighted by the information content of the words in sentence 1, to the overall information content included in both sentences. The method proposed by (Liu et al., 2013) presents an information theory-based approach to calculating the similarity between very short texts and sentences using WordNet, a common-sense knowledge base and human intuition. For Bengali text, the work by (Sinha et al., 2012) designs and develops a Bangla lexicon based on semantic similarity among Bangla words from Samsad Samarthasabdokosh. The lexicon is hierarchically organized into categories and sub-categories, and the words are grouped into clusters along with their synonyms. Weighted edges exist between different types of words related to the same or different concepts or categories, denoting the semantic distance between them. (Sinha et al., 2014) proposed a hierarchically organized semantic lexicon in Bangla, together with a graph-based edge-weighting approach to measure the semantic similarity between two Bangla words. Our work follows the information theory-based method rather than the edge counting-based method, as the edge counting method is expedient only for particular applications with constrained taxonomies (Li et al., 2003). 
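The semantic-vector idea of (Li et al., 2006) can be sketched with crude binary word-presence vectors over the joint word set of the pair (the real system weights entries by WordNet-based word similarity; the binary simplification here is ours):

```python
import math

def sentence_similarity(s1: str, s2: str) -> float:
    """Cosine similarity of binary word-presence vectors built over the
    joint word set of the sentence pair (a simplification of Li et al., 2006)."""
    w1, w2 = s1.lower().split(), s2.lower().split()
    vocab = sorted(set(w1) | set(w2))          # joint word set = feature set
    v1 = [1.0 if w in w1 else 0.0 for w in vocab]
    v2 = [1.0 if w in w2 else 0.0 for w in vocab]
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return dot / norm if norm else 0.0
```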
In this paper, our work explains an approach to determining the semantic relatedness between any two tweets.

8 Conclusion and Future Work

This paper presents an initial approach to measuring the semantic similarity between two Bengali tweets based on word meanings. Bengali tweets are less noisy in nature compared to English ones. In general people use fewer abbreviated forms ('gr8' for great), word play ('goooood' for good), etc., but Romanized/transliterated writing and code-mixing are very prominent in Indian social media. Moreover, romanization of Indian languages has no writing standard. People are literally whimsical about spelling over social media; for example pyari (beloved) could be written in various phonetically similar spellings: pyaari, payari, piari, etc. We are currently working on PTE detection on code-mixed Bengali tweets.

References

A. Das and S. Bandyopadhyay, "Morphological Stemming Cluster Identification for Bangla", In Knowledge Sharing Event-1: Task 3: Morphological Analyzers and Generators, Mysore, January 2010.

Boyd-Graber, J., Fellbaum, C., Osherson, D., and Schapire, R. (2006). "Adding dense, weighted connections to WordNet." In: Proceedings of the Third Global WordNet Meeting, Jeju Island, Korea, January 2006.

Cohen, J., "A coefficient of agreement for nominal scales". Educational and Psychological Measurement. 1960; 20(1):37-46.

D.L. Medin, R.L. Goldstone, and D. Gentner, "Respects for Similarity," Psychological Rev., vol. 100,</s>
|
<s>no. 2, pp. 254-278, 1993.

Dandapat, S., Sarkar, S. and Basu, A. "Automatic Part-of-Speech Tagging for Bengali: An Approach for Morphologically Rich Languages in a Poor Resource Scenario". In Proceedings of the Association for Computational Linguistics (ACL 2007), Prague, Czech Republic.

Das, A. and Bandyopadhyay, S. "Semanticnet-perception of human pragmatics". In Proceedings of the 2nd Workshop on Cognitive Aspects of the Lexicon, pages 2-11, Beijing, China, 2010.

Fellbaum, C. (2010). "WordNet." Theory and Application of Ontology: Computer Applications. New York: Springer, 231-243.

Hongzhe Liu, Pengfei Wang, "Assessing Sentence Similarity Using WordNet based Word Similarity", Journal of Software, Vol. 8, No. 6, June 2013.

Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith, "Part-of-Speech Tagging for Twitter: Annotation, Features, and Experiments", In Proceedings of ACL 2011.

Lee, J., Kim, M., and Lee, Y. (1993). "Information retrieval based on conceptual distance in is-a hierarchy". Journal of Documentation, 49(2):188-207.

Liu, H. and Singh, P. (2004). "ConceptNet - a practical commonsense reasoning tool-kit". BT Technology Journal, 22(4):211-226.

Manjira Sinha, Abhik Jana, Tirthankar Dasgupta, Anupam Basu, "A New Semantic Lexicon and Similarity Measure in Bangla", Proceedings of the 3rd Workshop on Cognitive Aspects of the Lexicon (CogALex-III), pages 171-182, COLING 2012, Mumbai, December 2012.

Manjira Sinha, Tirthankar Dasgupta, Abhik Jana, Anupam Basu, "Design and Development of a Bangla Semantic Lexicon and Semantic Similarity Measure", International Journal of Computer Applications (0975-8887), Volume 95, No. 5, June 2014.

Manning, Christopher D., Surdeanu, Mihai, Bauer, John, Finkel, Jenny, Bethard, Steven J., and McClosky, David. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 55-60.

Niladri Sekhar Dash, "A Descriptive Study of Bengali Words", Cambridge University Press, December 2014.

Rada, R., Mili, H., Bicknell, E., and Blettner, M. (1989). "Development and application of a metric on semantic nets". IEEE Transactions on Systems, Man and Cybernetics, 19(1):17-30.

Resnik, P. "Selection and information: a class-based approach to lexical relationships". IRCS Technical Reports Series, page 200, 1993.

Resnik, P. "Semantic classes and syntactic ambiguity". In Proc. of ARPA Workshop on Human Language Technology, pages 278-283, 1993.

Richardson, R., Smeaton, A., and Murphy, J. (1994). "Using WordNet as a knowledge base for measuring semantic similarity between words". Technical Report Working Paper CA-1294, School of Computer Applications, Dublin City University.

Sumit Bhagwani, Shrutiranjan Satapathy, Harish Karnick, "sranjans: Semantic Textual Similarity using Maximal Weighted Bipartite Graph Matching", First Joint Conference on Lexical and Computational Semantics (*SEM), pages 579-585, Montréal, Canada, June 7-8, 2012.

Tversky, A. (1977). "Features of similarity". Psychological Review, 84(4):327.

Xiao-Ying Liu, Yi-Ming Zhou, Ruo-Shi Zheng. "Measuring Semantic Similarity Within Sentences". Proceedings of the Seventh International Conference on Machine Learning and Cybernetics, Kunming, 2008.

Xu, W., Ritter, A., Callison-Burch, C., Dolan, W. B., and Ji, Y. (2014). "Extracting lexically divergent paraphrases from Twitter". Transactions of the Association for Computational Linguistics (TACL), 2(1).

Yuhua Li, David McLean, Zuhair A. Bandar, James D. O'Shea, and Keeley Crockett. "Sentence Similarity Based</s>
|
<s>on Semantic Nets and Corpus Statistics". IEEE Transactions on Knowledge and Data Engineering, Vol. 18, No. 8, 2006.

Yuhua Li, Zuhair A. Bandar, and David McLean, "An Approach for Measuring Semantic Similarity between Words Using Multiple Information Sources", IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 4, July/August 2003.</s>
|
<s>Enhancing the Performance of Semantic Search in Bengali using Neural Net and other Classification Techniques

Arijit Das1, Diganta Saha1
1Department of Computer Science and Engineering, Faculty of Engineering and Technology, Jadavpur University, Kolkata, West Bengal, 700032 INDIA
Corresponding author: Arijit Das (e-mail: arijit.das@ieee.org).

Abstract: Search has long been an important tool for users to retrieve information. Syntactic search matches documents or objects containing specific keywords, using signals like user history, location and preferences to improve the results. However, the query and the best answer often have no terms, or very few terms, in common, and syntactic search cannot perform properly in such cases. Semantic search, on the other hand, resolves these issues but suffers from the lack of annotation and the absence of a WordNet in the case of low-resource languages. In this work, we demonstrate an end-to-end procedure to improve the performance of semantic search using semi-supervised and unsupervised learning algorithms. An available Bengali repository, covering primarily seven types of semantic properties, was chosen to develop the system. Performance has been tested using Support Vector Machine, Naive Bayes, Decision Tree and Artificial Neural Network (ANN) classifiers. Our system achieves the ability to predict the correct semantics using a knowledge base that improves over the time of learning. A repository containing around one million sentences, a product of the TDIL project of the Govt. of India, was used to test our system in the first instance. Testing was then done for other languages. Being a cognitive system, it may be very useful for improving user satisfaction in e-Governance or m-Governance in a multilingual environment, and also for other applications.

Keywords: Semantic Search, Deep Learning, SVM, Naive Bayes, Neural Network, Decision Tree

I. 
INTRODUCTION

Semantic search has been around for quite some time and has gained widespread use due to its applications and promising results. Most developing countries are multilingual, and the emerging economies of the world also use more than one official language for communication. India, the largest multilingual democracy, has 22 languages which have official recognition in the constitution and whose promotion is encouraged by the government. In India there are also 122 languages which are spoken by more than ten thousand people each and are defined as major languages. Besides these, 1,599 other languages exist in India which are used by a very small portion of the population. India has a seventy percent rural population, and the majority of them are only proficient in their mother tongue. They prefer to use their native language over the internet; in other words, they use the internet more, for all e-Governance applications, if the content is available in their mother language. Search is one of the major operations performed frequently by internet users. Let us look at some case studies where "semantic search" or "contextual meaning" prevails across different language domains. Suppose some Bengalee person (people of West Bengal, India or</s>
|
<s>Bangla Desh whose mother tongue is Bengali) needs to reset his watch, so he wants to know the accurate time over the web and searches "কটা বাােজ?" (/katā bāje?/, "What is the time now?"). As of 06.07.2019 at 15:43, Google, Bing and Yahoo all fail to give the answer: they either show a blank result or return some pages which merely contain the term "কটা বাােজ? /katā bāje?/". But the searcher, who (say) does not know the English language, wants to know the time, so the search result should include the local time, GMT, etc. The search engine needs to understand the meaning or context of the searcher's query. For a smart search engine, such queries should point to the same answer as the query "what is the time now?", but search engines fail to understand the meaning of the query and therefore cannot retrieve the current local time or Greenwich Mean Time. Citizens' feedback is one of the most important pillars of good governance. E-governance makes the task of giving feedback easy and affordable, and giving input in the native language is easy nowadays with the soft keyboards available for native languages. But if a question is asked in the Nepali language and the answer is present in the Portuguese language, the system fails to retrieve the result. As a case study, suppose a farmer of the Darjeeling district of West Bengal is asking a question over the internet about orange farming in Nepali and the answer is already present in Portuguese. Due to the lack of common words the search engine fails to populate the correct answers. The meaning of a word changes with its use in the sentence in any language. Word Sense Disambiguation (WSD) is used to differentiate the actual meanings of the same word used differently in different texts. 
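The dictionary-consultation strategy behind WSD is the heart of the classic Lesk approach: choose the sense whose dictionary gloss overlaps most with the context words. A minimal sketch, with an invented two-sense gloss inventory:

```python
TOY_GLOSSES = {  # invented mini sense inventory, for illustration only
    "bank.finance": "an institution that accepts deposits and lends money",
    "bank.river":   "the sloping land alongside a river or lake",
}

def lesk(context: str, glosses: dict) -> str:
    """Pick the sense whose gloss shares the most words with the context."""
    ctx = set(context.lower().split())
    def overlap(sense: str) -> int:
        return len(ctx & set(glosses[sense].split()))
    return max(glosses, key=overlap)
```

A real system would use WordNet glosses and normalize stop words, but the selection principle is the same.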
For example, the word "Bank" represents different meanings, leading to different senses, in various contexts. The word "Bank" can denote a financial organization or a riverside or seaside; it can be a proper noun or a common noun, even a verb or an adverb. It is difficult for a machine to differentiate the context, which is easy for a human being with his or her innate linguistic intelligence. The branch of Word Sense Disambiguation (WSD) focuses on this challenge, where the system or machine is trained in such a way that it becomes able to differentiate the meanings of the same word used in different contexts. The method of learning for a machine may be statistical formulae, grammatical rules, or consultation of dictionaries or WordNet for the meaning or sense of neighboring words. The way a machine learns to predict the semantics can be supervised, unsupervised or semi-supervised. In the case of supervised learning, the system predicts based on some predefined rules; framing these rules is tedious and time-consuming. In the case of unsupervised learning, the system learns to predict from the</s>
|
<s>past prediction, and accuracy increases over time. It uses a series of statistical algorithms. The result of the system improves over time and is not accurate in the first instances, but here human labor and time consumption are much less. The semi-supervised technique tries to take advantage of both the supervised and the unsupervised method. We have taken a repository of nearly one million sentences from ISI Kolkata, funded by the Ministry of Electronics and IT, Govt. of India, under a project named TDIL (Technology Development for Indian Languages). If an answer is available for the query, it is returned, no matter whether there is any common term between the question (sentence-1) and the retrieved answer (sentence-2) or not. Two examples of sentence-1, the questions, are: ােক এবার আইিপএল এ সবােচােয় ােবশী রান কােরােছ? (/ke ebar IPL e sabcheye beshee run korechhe?/ or "Who has scored highest in this IPL?") and র োনোল্ডো র োন ফুটবল ক্লোব এ সোল্ে যুক্ত? (/Ronāldo kon football club er sāthe jukta?/ or "With which football club is Ronaldo associated?"). Our system returns the correct answers. A set of 250 questions was fired, the answers were collected, and the final result was evaluated by experts.

II. RELATED WORK

[1] contains a review of various measures for semantic similarity, such as feature-based measures, measures based on path length, information-based solutions, etc. Iglesias et al. (2018) proposed the method 'wpath' to combine the two conventional measuring techniques, information content and path length-based measures, to measure semantic similarity in Knowledge Graphs (KGs) and DBpedia. The proposed method shows an improvement over other measuring methods when evaluated on a well-known dataset [2]. Semantic similarity plays a major role in information retrieval and web mining, to retrieve documents semantically similar to the query submitted by the user [3]. 
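The path-length family of measures surveyed in [1] can be made concrete with the Wu-Palmer score, 2 * depth(LCS) / (depth(c1) + depth(c2)), where LCS is the least common subsumer; computed here over an invented toy is-a taxonomy:

```python
# Toy is-a taxonomy via parent pointers (invented); the root has depth 1.
PARENT = {"entity": None, "animal": "entity", "dog": "animal",
          "cat": "animal", "poodle": "dog"}

def ancestors(c: str) -> list:
    """The concept itself plus all its hypernyms, nearest first."""
    chain = [c]
    while PARENT[c] is not None:
        c = PARENT[c]
        chain.append(c)
    return chain

def depth(c: str) -> int:
    return len(ancestors(c))  # root has depth 1

def wu_palmer(c1: str, c2: str) -> float:
    """2*depth(LCS) / (depth(c1) + depth(c2)); LCS = deepest shared subsumer."""
    common = set(ancestors(c2))
    lcs = next(a for a in ancestors(c1) if a in common)
    return 2 * depth(lcs) / (depth(c1) + depth(c2))
```

With a real WordNet, the same formula is applied over the hypernym hierarchy of synsets.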
The proposed method of [3] uses synsets, a new way to calculate the similarity between terms, where online resources are used to derive the synsets. The benefit of the introduced work is that semantic equivalence is computed among words, which helps to expand a query with query suggestions or the most suitable queries. Several semantic similarity and relatedness methods have already been developed and have proved useful in certain applications of computational intelligence. These methods are generally classified into four groups: path length-based methods, depth-based measures, feature-based methods and information-based methods. The path length-based approach is a natural and direct way to evaluate semantic similarity in an ontology; its representative measures use the shortest path between concepts. The Wu and Palmer measure [4] and the Leacock and Chodorow measure [5] are examples of path length-based measures. Feature-based similarity measures use more semantic knowledge than path length-based methods: feature-based approaches evaluate the commonality and difference of the compared concepts in generality and ontology, and they derive from the Tversky similarity model [6] in set theory. The information theory-based</s>
|
<s>method for semantic similarity was first proposed by [7]. Sahni et al. [8] introduced a method in 2014 for measuring the semantic similarity of English words; the adopted measures were employed and learned using support vector machines. Jin et al. [9] proposed a comprehensive similarity metric, a relatedness measure and a comprehensive degree measure that combines semantic similarity and relatedness between two concepts.

III. PROBLEM STATEMENT

Technically the problem can be split into:
a. Processing the query.
b. Determining the query type.
c. Determining the class and subclass of the answer to the query from the repository.
d. Correlating the semantic similarity of the query and the predicted answers.
e. Conflict resolution, in case more than one class is predicted.
f. Extraction of the answer from the class or subclasses of the repository.
g. Composing more than one sentence if the answer lies in more than one sentence.

IV. PROPOSED APPROACH

We propose an approach that predicts the search result by processing the query using various NLP techniques and then using a series of classifiers to filter down to the specific portion of the repository where the answer is available. We have taken seven broad classes of sentences as the repository, namely: 1. Art & Culture, 2. Economics, 3. Entertainment, 4. Literature, 5. Politics, 6. Sports, and 7. Tourism. The corpus used in this work was developed under the Technology Development for Indian Languages (TDIL) project of the Ministry of Electronics and IT, Govt. of India (Dash 2007) and shared by the Language Research Unit of ISI Kolkata. This corpus, with a size of 11,300 A4-size pages, 271,102 sentences and 3,589,220 words, covers 50 different text categories like Agriculture, Child Literature, Physics, Math, Science etc. 
The input queries are passed through a set of annotation procedures, like processing of punctuation symbols and uneven spaces, normalization of fonts, and replacement of foreign words by equivalent words in the mother language. Punctuation marks are taken into account to predict the type of query: declarative, imperative, interrogative or exclamatory. Then the query is processed to get the part of speech (POS) of each word using the POS tagger of LTRC, IIIT Hyderabad. The "Das and Halder" algorithm has been used to predict the root form of the verb in the query, the tense (present, past or future), the person (1st, 2nd or 3rd), etc. In parallel, our classifier system is trained with already categorized sentences, so that it can predict the category or type of the input query correctly. For accurate results, four different types of algorithms have been used, namely the Naive Bayes probabilistic model, Support Vector Machine (SMO), Artificial Neural Network (ANN, multilayer perceptron) and Decision Tree (J48). This process is followed recursively up to n levels, that is, until reaching the level of a subclass with sentence-level atomicity. This model can generate two types of</s>
|
<s>ambiguity. First, when the same sentence or query is predicted as different categories by different algorithms in a single run; e.g., "Smriti Irani, who is a Bollywood actress, came into politics in 2003" can be classified as both "art & culture" and "politics". This is resolved by a weighted average: each algorithm's result is given a weight of ¼ (0.25), and when a specific category gets more than 0.50 weight, that category is chosen; if four different categories are chosen by the four algorithms (a rare case, which occurred only once), we keep all the predictions. The second kind of ambiguity arises when a sentence is classified as class A by only one algorithm while the remaining three give a NULL ('cannot be predicted') output, or when only two algorithms predict, but as two different classes, class A and class B. That is, when the total weight of a prediction does not cross 0.50, we take the only prediction in the first scenario, and both predictions in the second scenario, and pass them to the next level. Briefly, the algorithms used for classification are:

A. NAIVE BAYES

Subject(s) of a sentence, object(s) of a sentence, term frequency, inverse document frequency, length of the text object, dimensionality, entropy, keywords, tense, gender, person, number of subjects and objects, number of function words and number of content words are used as features or attributes for classification using Naive Bayes in our experiment. As an example, if a Naive Bayes classifier is expected to classify apples and oranges, it will apply the different attributes of the training set one by one to distinguish them. Suppose it applies shape first, but apples and oranges are both almost round. Then it applies color, and as an apple is red and an orange has a different color, it is able to classify them separately. In the case of the same color, surface texture may be used as an attribute as well. 
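The core counting machinery of such a Naive Bayes step can be sketched over bag-of-words features (the toy data and the add-one smoothing are our own simplifications; the actual system uses the richer feature set listed above):

```python
from collections import Counter
import math

def train_nb(docs):
    """docs: list of (word_list, label). Returns priors, per-class counts, vocab."""
    priors, counts, vocab = Counter(), {}, set()
    for words, label in docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(words)
        vocab |= set(words)
    return priors, counts, vocab

def classify(words, priors, counts, vocab):
    """argmax over classes of log p(Ck) + sum_i log p(xi | Ck), add-one smoothed."""
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        n = sum(counts[label].values())
        lp = math.log(priors[label] / total)
        for w in words:
            lp += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented toy training data in the spirit of the seven-class repository.
docs = [(["match", "run", "wicket"], "sports"),
        (["election", "vote"], "politics"),
        (["run", "goal"], "sports")]
model = train_nb(docs)
```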
The Naive Bayes classifier classifies objects using Bayes' theorem of conditional probability. Here each object is converted to a vector using the Word2Vec algorithm. The Naive Bayes classifier first assumes that all the features are independent of each other. Then training is done using the training set, and thus the classifier learns to classify the objects. It gives a numeric value to each object: the probability of the object being a member of a certain class, based on Bayesian conditional probability. Ultimately the object is assigned to the class in which it gets the maximum probabilistic value. When a new data point is added, the probabilities are recalculated and adjusted. Assuming that each attribute or feature xi is independent of any other attribute or feature xj for j not equal to i, given the category Ck, p(xi | xi+1, …, xn, Ck) = p(xi | Ck). Thus the joint probability can be written as:

p(Ck | x1, …, xn) ∝ p(Ck) · p(x1 | Ck) · p(x2 | Ck) · … · p(xn | Ck)

B. SUPPORT</s>
|
<s>VECTOR MACHINE

The Support Vector Machine, or SVM, can act as both a linear and a non-linear classifier. SVM tries to find the hyperplane with the maximum margin between sets of objects. The hyperplane is derived from the knowledge of the training set and its associated vectors; the training tuples which fall on the margin hyperplanes are known as support vectors. It is also possible that the data are linearly inseparable, i.e., it is impossible to separate the data objects with a hyperplane. In such a scenario the original input data is transformed into some higher-dimensional space using a mapping technique, and the transformed data in the higher-dimensional space becomes separable by a hyperplane. In our experiment, the system was first trained with the tagged or classified texts and the SVM classifier model was generated. Then the model was used to classify the test objects, with k-1 sets chosen randomly as the training set and the remaining kth set as the testing set, and the classification results were averaged. Subject(s) of a sentence, object(s) of a sentence, term frequency, inverse document frequency, length of the text object, dimensionality, entropy, keywords, tense, gender, person, number of subjects and objects, function words and content words are used as features or attributes for classification using the SVM model.

FIGURE 1. Decision Tree

C. DECISION TREE

A Decision Tree forms a logical representation, in tree structure, for taking decisions under different conditions learned from the training set. In the training set, the different attributes are identified first. Thereafter, depending on the 'True' and 'False' values of those attributes, directly and in nested branches, the objects of the training set are categorized into different classes. This predictive model is then used to classify the test data. Figure 1 shows a decision tree formed from the dataset of passengers of 'The Titanic Mishap'. 
The classifier has formed a tree with a boolean value associated with each branch, and the nodes or attributes are selected as sex (male: yes or no), age range etc., with the probability of survival. This tree is used to predict, for an object of a test dataset, whether the passenger should survive or not. In our experiment the training dataset (9 folds) generates a model, a tree with a boolean value associated with each branch, depending upon various attributes like subject(s) of a sentence, object(s) of a sentence, term frequency, inverse document frequency, length of the text object, dimensionality, entropy, keywords, tense, gender, person, number of subjects and objects etc. Then the test set (the 10th fold) is tested, and the split is shuffled over various iterations. The Decision Tree learns percentage values from the training set and applies them as probabilistic values to determine the class of the test set.

FIGURE 2. Artificial Neural Network

D. ARTIFICIAL NEURAL NETWORK

The Artificial Neural Network is designed to mimic the learning pattern of the brain and to improve its learning over a number of iterations. From 2012 it started to gain huge popularity with multilayered feed-forward networks, recurrent neural networks and advanced,</s>
|
<s>scalable, distributed GPUs, thus initiating deep learning by automating the load of feature extraction. In different machine learning challenges like speech recognition and pattern recognition it showed almost 15 to 20 percent improvement with respect to traditional statistical methods. An ANN takes a considerably long time to learn from the input dataset, depending upon the size of the dataset. The main advantage of an ANN is that it can correct itself to improve its accuracy over the number of iterations. There are input layers, hidden layers and output layers, where the input layers get direct input and the output of the output layer is treated as the final output of the ANN. Hidden layers do not get direct input from the outside environment; rather they take input from other layers of the ANN, typically another hidden layer or an input layer. The number of hidden layers also depends upon the design.

FIGURE 3. Das and Halder Algorithm

V. METHODOLOGY

1. Call the Shallow Parser to get the part of speech (POS) of each word of the input query.
2. Mark function words and content words in the input sentence.
3. Use the "Das and Halder" algorithm to extract the root form of the verb [Fig. 3].
4. Use WordNet to get synonyms and to enhance the dimensionality of the input vector.
5. Classify the query into one of the seven classes using the Naive Bayes, ANN, SVM and Decision Tree algorithms.
6. Apply Step 5 recursively to determine the subclass.
7. Hit the target sentences and use the knowledge base to extract the answer in the desired format.
8. Return the result to the user.

The detailed flowchart of our work is depicted in Figure 5. The use of the Shallow Parser and the extraction of the root verb are two separate works, the details of which are not described in the flowchart. 
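The classifier combination in Step 5 relies on the weighted conflict resolution described in Section IV, with each of the four algorithms weighted 0.25; a sketch with our own function names and representation:

```python
def resolve(predictions):
    """Combine the four classifiers' outputs, each weighted 0.25.
    predictions: list of 4 category names, with None for 'cannot be predicted'.
    Returns the list of categories kept for the next recursion level."""
    weights = {}
    for p in predictions:
        if p is not None:
            weights[p] = weights.get(p, 0) + 0.25
    if not weights:
        return []
    winners = [c for c, w in weights.items() if w > 0.50]
    # No category exceeds 0.50: keep every predicted category for the next level.
    return winners if winners else sorted(weights)
```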
The Shallow Parser developed by the LTRC group of IIIT Hyderabad is used, and thus the algorithm of POS tagging is not covered in this paper; acknowledgement is given at the end of this paper, and IIIT Hyderabad has also been informed about the percentage of improvement in the result attributable to the POS tagger. Features like Function Words (FW), Content Words (CW), the number of FW and the number of CW, subjects, objects and their numbers are extracted using the Shallow Parser. The detail of the Root Verb Extraction algorithm is given in the Das and Halder Algorithm (Figure 3), which is actually used to extract features like person, number, gender and tense of subjects and objects. We have used the Java language for implementation of the supervised algorithm for automatic root verb extraction. Weka was used for the different classification algorithms; its Java API has been used to call them recursively. A PostgreSQL relational database has been used as the knowledge base. The training set was prepared by the researchers without seeing the test set. A model was generated; then, using that model and the training set, the test set was evaluated. Here the test set was generated from</s>
|
<s>the user query. At each stage the result was evaluated by the method of cross-validation. Then the predicted sentence was passed to the "extractor" program, which, with the help of the knowledge base, formed the answer. The same was returned to the user, and his satisfaction was recorded to measure the performance of our system. WordNet has been used in two stages: first, to make our system understand the user query whenever required; mostly, when the meaning of any particular word is not known, our system tries to replace those words with entries from WordNet. Second, during the formation of the answer returned to the user, WordNet is used again to form the accurate answer and, in case of ambiguity, to give more than one context of answers. Some of the indexing settings we have used in the Weka tool for classification are listed below.
FIGURE 4. Table 1: Different kinds of suffixes applied to the verb in Bengali with tenses
A. USED TRAINING SET It is used to train the system, and also to evaluate how well the classifier's model classifies the training set itself.
B. SUPPLIED TEST SET It is the test set which we actually want to classify. It is used to evaluate the predictive performance of the classifier.
C. CROSS VALIDATION Cross-validation is a technique to average the test results. A dataset is split into X sets or pieces ("folds"). Then X-1 sets are used for training and the remaining Xth set is used for testing. This gives X evaluation results, which are averaged. In the case of 2-fold validation, the dataset is divided into d0 and d1, both of equal size. We first train the system with d0 and test with d1, then train with d1 and test with d0. The results are then summed up and divided by 2. We have taken here 10-fold cross-validation. D.</s>
PERCENTAGE SPLIT It mentions the portion of the data which is used for training; the remainder is used as test data. Suppose the percentage split is 70 percent and there are 100 instances of data in total: then instances 0 to 69 of the dataset are used as training data and instances 70 to 99 as test data. A random split is used with the help of a seed value.
E. OUTPUT MODEL The output model is generated based on the training set. It can be visualized and verified.
F. OUTPUT PER CLASS STATS For every output class, this gives the precision/recall and true/false statistics.
G. OUTPUT CONFUSION MATRIX The confusion matrix is one of the key metrics to test the performance of a classifier.
H. SCORE PREDICTION FOR VISUALIZATION The classifier's predictions are remembered so that they can be visualized.
I. RANDOM SEED FOR X VAL / PERCENTAGE</s>
|
<s>SPLIT This specifies the random seed used when randomizing the data before it is divided up for evaluation purposes.
J. OUTPUT ENTROPY EVALUATION MEASURE Entropy evaluation measures are included in the output.
K. OUTPUT PREDICTION The classifier's predictions are remembered so that they can be visualized.
FIGURE 5. Detailed Flowchart of the Methodology Used
VI. RESULT
A. RESULT SUMMARY
98 percent accuracy is achieved in the Root Verb Extraction by the Das & Halder Algorithm [10]. Confusion matrices for the four different classification methods are given in Table 2, Table 3, Table 4 and Table 5. A confusion matrix is a summary of prediction results on a classification problem: the numbers of correct and incorrect predictions are summarized with count values, broken down by each class. This is the key to the confusion matrix. It shows the ways in which a classification model is confused when it makes predictions, giving us insight not only into the errors being made by a classifier but, more importantly, into the types of errors being made. For 244 out of 250 questions, the system hit the correct sentence(s) where the answer is hidden, giving a 97.6 percent hit success. For 214 out of 250 questions, the answers given by the system were accurate, an 85.6 percent answer accuracy with moderate grammatical correctness.
B. DETAILED RESULT
i) Naïve Bayes
Correctly Classified Instances: 88 percent
Incorrectly Classified Instances: 12 percent
Kappa statistic: 0.1961
Mean absolute error: 0.4664
Root mean squared error: 0.5008
Relative absolute error: 93.3564 percent
Root relative squared error: 100.1666 percent
Total Number of Instances: 100
So our model identifies 88 correctly classified instances and 12 incorrectly classified instances. 
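The reported accuracies follow directly from the confusion matrices: accuracy is the diagonal (correct) count divided by the total count. A minimal sketch of that computation, using illustrative 2x2 counts rather than the paper's actual matrices:

```java
/** Minimal sketch: deriving accuracy from a confusion matrix.
 *  The 2x2 counts below are illustrative, not the paper's actual data. */
public class ConfusionMatrix {

    /** Accuracy (percent) = sum of diagonal entries / sum of all entries. */
    static double accuracy(int[][] m) {
        int correct = 0, total = 0;
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[i].length; j++) {
                total += m[i][j];
                if (i == j) correct += m[i][j];  // diagonal = correct
            }
        return 100.0 * correct / total;
    }

    public static void main(String[] args) {
        // rows = actual class, columns = predicted class
        int[][] m = { { 46, 6 },     // class a: 46 right, 6 misclassified
                      { 6, 42 } };   // class b: 42 right, 6 misclassified
        System.out.println(accuracy(m) + " percent");  // 88.0 percent
    }
}
```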
So the accuracy of the model is 88 percent and the inaccuracy of the model is 12 percent. The confusion matrix for Naive Bayes is given in table 1.
ii) SVM (SMO)
Correctly Classified Instances: 86 percent
Incorrectly Classified Instances: 14 percent
Kappa statistic: 0.2834
Mean absolute error: 0.36
Root mean squared error: 0.6
Relative absolute error: 72.0627 percent
Root relative squared error: 120.0079 percent
Total Number of Instances: 100
So our model identifies 86 correctly classified instances and 14 incorrectly classified instances. The accuracy of the model is 86 percent and the inaccuracy of the model is 14 percent. The confusion matrix for SVM is given in table 2.
iii) ANN
Correctly Classified Instances: 96 percent
Incorrectly Classified Instances: 4 percent
Kappa statistic: 0.1186
Mean absolute error: 0.442
Root mean squared error: 0.59
Relative absolute error: 8.478 percent
Root relative squared error: 118.0054 percent
Total Number of Instances: 100
So our model identifies 96 correctly classified instances and 4 incorrectly classified instances. The accuracy of the model is 96 percent and the inaccuracy of the model is 4 percent. The confusion matrix for ANN is given in table 3.
iv) DECISION TREE
Correctly Classified Instances: 73 percent
Incorrectly Classified Instances: 27 percent
Kappa statistic: 0.4534
Mean absolute error: 0.3493 Root</s>
|
<s>mean squared error: 0.4748
Relative absolute error: 69.9227 percent
Root relative squared error: 94.9762 percent
Total Number of Instances: 100
So our model identifies 73 correctly classified instances and 27 incorrectly classified instances. The accuracy of the model is 73 percent and the inaccuracy of the model is 27 percent. The confusion matrix for Decision Tree is given in table 4.
VII. PERFORMANCE ANALYSIS
A detailed analysis of the results follows.
A. Naive Bayes
=== Run information ===
Scheme: weka.classifiers.bayes.NaiveBayes
Relation: weka.datagenerators.classifiers.classification.
Instances: 100
Attributes: 10 class Node2 Node3 Node4 Node5 Node6 Node7 Node8 Node9 Node10
Test mode: split 66.0 percent train, remainder test
=== Classifier model (full training set) ===
Naïve_Bayes_Classifier
             Class
Attribute    Value1  Value2
             (0.52)  (0.48)
==========================================
Node2
  Value1     23.0    29.0
  Value2     31.0    21.0
  [total]    54.0    50.0
Node3
  Value1     33.0    27.0
  Value2     21.0    23.0
  [total]    54.0    50.0
Node4
  Value1     22.0    26.0
  Value2     32.0    24.0
  [total]    54.0    50.0
Node5
  Value1     26.0    22.0
  Value2     28.0    28.0
  [total]    54.0    50.0
Node6
  Value1     27.0    22.0
  Value2     27.0    28.0
  [total]    54.0    50.0
Node7
  Value1     20.0    24.0
  Value2     34.0    26.0
  [total]    54.0    50.0
Node8
  Value1     27.0    39.0
  Value2     27.0    11.0
  [total]    54.0    50.0
Node9
  Value1     22.0    25.0
  Value2     32.0    25.0
  [total]    54.0    50.0
Node10
  Value1     29.0    29.0
  Value2     25.0    21.0
  [total]    54.0    50.0
Time taken to build model: 0.05 seconds
=== Evaluation on test split ===
Time taken to test model on test split: 0.05 seconds
B. SVM (SMO)
=== Run information ===
Scheme: weka.classifiers.functions.SMO
Relation: weka.datagenerators.classifiers.classification.
Instances: 100
Attributes: 10 class Node2 Node3 Node4 Node5 Node6 Node7 Node8 Node9 Node10
Test mode: split 66.0 percent train, remainder test
=== Classifier model (full training set) ===
SMO
Kernel used: Linear Kernel: K(x,y) = <x,y>
Classifier for classes: Value1, Value2
Binary SMO
Machine linear: showing attribute weights, not support vectors. 
  -0.5999 * (normalized) class=Value2
+ -0.0002 * (normalized) Node2=Value2
+ -0.6005 * (normalized) Node3=Value2
+  0.0004 * (normalized) Node4=Value2
+  0.5997 * (normalized) Node5=Value2
+  0.0003 * (normalized) Node6=Value2
+ -1.3999 * (normalized) Node7=Value2
+ -0.5998 * (normalized) Node8=Value2
+  0.0005 * (normalized) Node9=Value2
+  0.9999
Number of kernel evaluations: 3284 (80.151 percent cached)
Time taken to build model: 0.05 seconds
=== Evaluation on test split ===
Time taken to test model on test split: 0.06 seconds
C. Multi Layer Perceptron (ANN)
=== Run information ===
Scheme: weka.classifiers.functions.MultilayerPerceptron
Relation: weka.datagenerators.classifiers.classification.
Instances: 100
Attributes: 10 class Node2 Node3 Node4 Node5 Node6 Node7 Node8 Node9 Node10
Test mode: split 86.0 percent train, remainder test
=== Classifier model (full training set) ===
Sigmoid Node 0 Inputs Weights Threshold -0.46337480146533117 Node 2 8.103745556620964 Node 3 -6.072875619653186 Node 4 6.372495140324609 Node 5 -3.9503308596408644 Node 6 -7.215969568907012
Sigmoid Node 1 Inputs Weights Threshold 0.4633740069800474 Node 2 -8.1034974021937 Node 3 6.072668908499916 Node 4 -6.372283394108063 Node 5 3.950231863927222 Node 6 7.215717271685786
Sigmoid Node 2 Inputs Weights Threshold 2.7683245856190135 Attrib class=Value2 0.8682762442270585 Attrib Node2=Value2 -4.431137559546951 Attrib Node3=Value2 3.783617770354907 Attrib Node4=Value2 -4.380158025756841 Attrib Node5=Value2 -6.569100948175245 Attrib Node6=Value2 1.2644761203361334 Attrib Node7=Value2 10.136894593720866 Attrib Node8=Value2 2.2222931644808117 Attrib Node9=Value2 -1.1961756750583683
Sigmoid Node 3 Inputs Weights Threshold</s>
|
<s>1.3449330093589562 Attrib class=Value2 -1.4873176018858711 Attrib Node2=Value2 -3.4738558540312514 Attrib Node3=Value2 -0.8903481429148983 Attrib Node4=Value2 -3.30524117606012 Attrib Node5=Value2 -1.9954254971320733 Attrib Node6=Value2 -5.043053347446121 Attrib Node7=Value2 -7.272891242499204 Attrib Node8=Value2 3.48425700470805 Attrib Node9=Value2 -5.422571254947003
Sigmoid Node 4 Inputs Weights Threshold 0.869193799614438 Attrib class=Value2 5.390911683768276 Attrib Node2=Value2 -0.7486372607912477 Attrib Node3=Value2 -6.390427323436479 Attrib Node4=Value2 -3.6222767041324264 Attrib Node5=Value2 -4.135964801658738 Attrib Node6=Value2 -5.691365511528745 Attrib Node7=Value2 2.727486953876179 Attrib Node8=Value2 0.6095482877748186 Attrib Node9=Value2 -1.7878361719212885
Sigmoid Node 5 Inputs Weights Threshold -3.764934366886755 Attrib class=Value2 0.8927457455455018 Attrib Node2=Value2 -1.9686974609437013 Attrib Node3=Value2 -2.295112500524242 Attrib Node4=Value2 -4.21750653575111 Attrib Node5=Value2 -3.2436685768106672 Attrib Node6=Value2 -2.333295775325579 Attrib Node7=Value2 3.217524336891349 Attrib Node8=Value2 -0.9017912544548131 Attrib Node9=Value2 0.7290063727690463
Sigmoid Node 6 Inputs Weights Threshold -4.36151269195419 Attrib class=Value2 -2.535835981153719 Attrib Node2=Value2 0.5990975373610684 Attrib Node3=Value2 -1.9828745134956305 Attrib Node4=Value2 -3.7293430187368592 Attrib Node5=Value2 -3.208820300338705 Attrib Node6=Value2 0.6159441682751483 Attrib Node7=Value2 7.332032494106006 Attrib Node8=Value2 -0.1418907230980749 Attrib Node9=Value2 -0.23822839374741406
Class Value1 Input Node 0
Class Value2 Input Node 1
Time taken to build model: 0.1 seconds
=== Evaluation on test split ===
Time taken to test model on test split: 0.05 seconds
D. 
Decision Tree J48
=== Run information ===
Scheme: weka.classifiers.trees.J48
Relation: weka.datagenerators.classifiers.classification.
Instances: 100
Attributes: 10 class Node2 Node3 Node4 Node5 Node6 Node7 Node8 Node9 Node10
Test mode: split 66.0 percent train, remainder test
=== Classifier model (full training set) ===
J48 pruned tree
------------------
Node7 = Value1
| Node5 = Value1
| | Node8 = Value1: Value1 (15.0/3.0)
| | Node8 = Value2
| | | Node2 = Value1: Value1 (7.0/2.0)
| | | Node2 = Value2: Value2 (12.0/2.0)
| Node5 = Value2
| | Node8 = Value1: Value2 (15.0)
| | Node8 = Value2
| | | Node4 = Value1
| | | | class = Value1: Value2 (3.0/1.0)
| | | | class = Value2: Value1 (6.0/1.0)
| | | Node4 = Value2: Value2 (6.0/1.0)
Node7 = Value2
| Node4 = Value1
| | Node5 = Value1
| | | Node9 = Value1: Value2 (4.0)
| | | Node9 = Value2: Value1 (4.0/1.0)
| | Node5 = Value2
| | | class = Value1
| | | | Node2 = Value1: Value2 (2.0)
| | | | Node2 = Value2: Value1 (3.0/1.0)
| | | class = Value2: Value1 (11.0/1.0)
| Node4 = Value2: Value1 (12.0/1.0)
Number of Leaves: 13
Size of the tree: 25
Time taken to build model: 0.02 seconds
=== Evaluation on test split ===
Time taken to test model on test split: 0.01 seconds.
VIII. APPLICATIONS
Semantic search improves contextual meaning finding. When a user asks a question to get an answer, it plays a crucial role; thus, for an automatic question answering system, semantic search is the backbone. Semantic search is extensively used for geographical map annotation, and the knowledge acquired from semantic search in text analytics is being used in bioinformatics as well. It has huge importance in Automatic Question Answering systems, News Classification, Text Summarization, WordNet improvement and Sentiment Analysis.
IX. SCOPE FOR IMPROVEMENTS
Making the</s>
|
<s>Knowledge Base a self-learning system is the next challenge. This is possible by way of semi-supervised learning, incorporating human intelligence into the system: the output of the system will be verified by human feedback, and in the next iteration it will improve by cognition. The algorithms used are completely language independent. The performance of the system has been tested on the Indo-Aryan language group; testing the same for other language groups is the next possible enhancement.
X. CONCLUSION
In this work, an attempt is made to design an effective algorithm for semantic search. There are three major tasks: the first is to process the query, the second is to point to the portion of the repository where the probable answer is hidden, and the third is to frame the answer from the pointed sentences. POS tagging, root verb extraction, recursive classification to predict the portion of the repository where the probable answer is hidden and, at the last stage, extraction of the answer using the knowledge base have been used. Artificial Neural Network, Naive Bayes, SVM (SMO) and Decision Tree have been used as statistical classification processes. Finally, the knowledge base was used to form the answer and return it to the user. At the initial stage, the testing of the performance of the system was done on an Indic-language dataset, Bengali. The accuracy was measured by expert linguists. From the beginning, the design and development of the system was carried out in such a fashion that it can be useful globally without any regional language constraint. The same has been tested at a later stage with other languages also, and the system is running perfectly well. Measuring the accuracy and precision for all the natural languages in the world is beyond our capability, and it is expected that the researchers and linguists of other language groups will use our algorithm and compare the performance with their own systems. 
As of now, no question answering system is available for Bengali; most are available in English or popular European languages. So our research work is aimed particularly at catering to the needs of a language community that has low resources and low popularity, but the work is also applicable to any language in general.
ACKNOWLEDGMENT
We are grateful to the LTRC group, IIIT Hyderabad for providing the Shallow Parser for POS tagging. We are also thankful to Professor Niladri Sekhar Das of the Indian Statistical Institute, Kolkata (ISI Kolkata) for providing the dataset. He, being a renowned linguist, also helped in the evaluation.
REFERENCES
[1] Gu J., Huang R. and Meng L., "A Review on Semantic Similarity Measure in Wordnet," International Journal of Hybrid Information Technology, vol. 6, no. 1, pp. 502-505, 2013.
[2] Iglesias A. and Zhu G., "Computing Semantic Similarity of Concepts in Knowledge Graphs," Transactions on Knowledge and Data Engineering, vol. 29, no.</s>
|
<s>1, pp. 273-275, 2018.
[3] Duhan N., Nagpal C. K., Katuria M. and Payal, "Semantic similarity between terms for query suggestion," in Proc. 5th ICRITO, vol. 2, no. 2, 2017, pp. 27-34.
[4] Arijit Das and Diganta Saha, "Improvement of electronic governance and mobile governance in multilingual countries with digital etymology using Sanskrit grammar," in Proc. IEEE Region 10 Humanitarian Technology Conference (R10-HTC), Dhaka, Bangladesh, 2017, pp. 502-505.
[5] Arijit Das, Tapas Halder and Diganta Saha, "Automatic extraction of Bengali root verbs using Paninian grammar," in Proc. 2nd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bangalore, India, 2017, pp. 953-956.
[6] Eetu Makela, "Survey of semantic search research," presented at the Seminar on Knowledge Management on the Semantic Web, Tenerife, Canary Islands, Spain, 2005.
[7] Hai Dong and Farookh Khadeer Hussain, "Service-requester-centered service selection and ranking model for digital transportation ecosystems," Computing, vol. 97, no. 1, pp. 79-102, 2015.
[8] Edy Portmann, "The FORA framework: a fuzzy grassroots ontology for online reputation management," Springer, USA, 2012.
[9] Hai Dong, "Semantic Search Engines and Related Technologies in: A Customized Semantic Service Retrieval Methodology for the Digital Ecosystems Environment," PhD Thesis, Curtin University, pp. 71-104, 2010.
[10] Z. B. Wu and M. Palmer, "Verb Semantics and Lexical Selection," in Proc. 32nd Annual Meeting of the Association for Computational Linguistics, Las Cruces, USA: ACL, 1994, pp. 133-138.
[11] C. Leacock and M. Chodorow, "Combining Local Context and WordNet Similarity for Word Sense Identification," WordNet: An Electronic Lexical Database, vol. 49, no. 2, pp. 265-283, 1998.
[12] A. Tversky, "Features of Similarity," Psychological Review, vol. 84, no. 4, pp. 327-352, 1977. 
[13] P. Resnik, "Using Information Content to Evaluate Semantic Similarity in a Taxonomy," in Proc. 14th International Joint Conference on Artificial Intelligence, vol. 2, no. 1, 1995, pp. 51-79.
[14] Lakshay Sahni, Anubhav Sehgal, Shaivi Kochar, Faiyaz Ahmad and Tanvir Ahmad, "A Novel Approach to Find Semantic Similarity Measure between Words," in Proc. IEEE 2nd International Symposium on Computational and Business Intelligence, 2014, pp. 89-92.
[15] Lakshay Sahni, Anubhav Sehgal, Shaivi Kochar, Faiyaz Ahmad and Tanvir Ahmad, "A Novel Approach to Find Semantic Similarity Measure between Words," in Proc. IEEE 2nd International Symposium on Computational and Business Intelligence, 2014, pp. 89-92.
[16] Yunzhi Jin, Hua Zhou, Hongji Yang, Yong Shen, Zhongwen Xie, Yong Yu and Feilu Hang, "An Approach to Measuring Semantic Similarity and Relatedness between Concepts in an Ontology," in Proc. 23rd International Conference on Automation and Computing (ICAC), 2017, pp. 1-6.
[17] Bo Zhu, Xin Li and Jesus Bobadilla Sancho, "A Novel Asymmetric Semantic Similarity Measurement for Semantic Job Matching," in Proc. International Conference on Security, Pattern Analysis and Cybernetics (SPAC), 2017, pp. 152-157.
[18] M. F. Mridha, A. K. Saha and J. K. Das, "New approach of solving semantic ambiguity problem of Bangla root words using universal networking language (UNL)," in Proc. 3rd International Conference on Informatics, Electronics and Vision, Dhaka, Bangladesh, 2014, pp. 201-210.
[19] M. Choudhury, V. Jalan, S. Sarkar and A. Basu, "Evolution, optimization</s>
|
<s>and language change: The case of Bengali verb inflections," in Proc. Ninth Meeting of the ACL Special Interest Group in Computational Morphology and Phonology, Prague, 2007, pp. 65-74.
[20] M. S. Islam, "Research on Bangla language processing in Bangladesh: progress and challenges," in Proc. 8th International Language and Development Conference, Bangladesh, 2009, pp. 23-25.
[21] Asif Ekbal, Rejwanul Haque and Sivaji Bandyopadhyay, "Maximum entropy based Bengali part of speech tagging," in Advances in Natural Language Processing and Applications, Research in Computing Science, 2008, pp. 67-78.
AUTHORS PROFILE
ARIJIT DAS received the B.Tech. degree in Computer Science and Engineering in 2011 from Govt. College of Engineering and the M.E. in Computer Science and Engineering in 2013 from Jadavpur University, Kolkata, India, with a GATE fellowship. He then joined as a Scientific Officer in the Ministry of IT, Govt. of India. Currently he is pursuing a PhD (Engg.) at Jadavpur University. He became a member of IEEE in 2016.
DIGANTA SAHA is currently working as a Professor in the Department of Computer Science and Engineering at Jadavpur University. He works in the field of Natural Language Processing.</s>
|
<s>An Approach Towards Multilingual Translation By Semantic-Based Verb Identification And Root Word Analysis
Md. Saidul Hoque Anik, Department of CSE, BUET, Dhaka, Bangladesh. onix.hoque@gmail.com
Md. Adnanul Islam, Department of CSE, BUET, Dhaka, Bangladesh. islamadnan2265@gmail.com
A. B. M. Alim Al Islam, Department of CSE, BUET, Dhaka, Bangladesh. alim razi@cse.buet.ac.bd
Abstract—Popular and widely available translators like Google Translator use a statistics-based approach to build a multilingual translation system. This approach solely depends on the availability of a large number of samples, which is why Google Translator performs interestingly well when it translates among popular languages like English, French or Spanish, yet makes elementary mistakes when it translates languages that are newly introduced or less known to the system. Most of the research found so far on natural language processing (NLP) has been performed keeping English as the target language. However, a good number of widely spoken, potential languages remain nearly unexplored in the research fields, which is quite unexpected in the era of global communication. In this study, we have tried to explore a generalized machine translation system, especially for the languages having insufficient availability in the literature. This study basically focuses on the Bengali language as an example of such low-resource languages. In this work, we have proposed different approaches for semantic-based verb identification along with its translation, and hence developed an algorithm for root word detection of a verb in any sentence, which reflects significant improvement over Google Translator. Finally, we have shown a comparison among the different approaches in terms of accuracy, time complexity and space complexity.
Keywords—NLP, OpenNLP, Levenshtein, Wordnet, EBMT, SDL
I. 
INTRODUCTION
Human beings have been communicating with various spoken languages since their earliest days on the Earth. Human languages can express thoughts on an unlimited number of topics, e.g., the weather, the past, the future, gossip, etc. Every human language has a vocabulary consisting of hundreds of thousands of words, which is initially built up from several dozen speech sounds. A more remarkable point to be noted here is that every normal human child basically learns the whole system just from hearing others use it. While many believe that the number of languages in the world is about 6500, there are actually 7097 living languages in the world [19]. Although this number might be the latest count, there is no one clear answer as to the exact number of languages that still exist. One statistic tells us that about 230 languages are spoken in Europe, whereas over 2000 languages are spoken across Asia. In the era of globalization, people often need to communicate in more than one language. As it is quite tough for a single person to learn and track multiple languages, the importance of machine translation follows. Machine translation has emerged as one of the top valuable technologies for localization and arguably even for global economies. It works reasonably well for most of the highly popular languages like English, French, Spanish, etc. Bengali is considered one of the low-resource languages for machine translation as it lacks different language resources like electronic texts and parallel corpora. Around 38% of Bengali-speaking people are monolingual. Since the significance of learning English is unavoidable at present, it is important to have a well-developed Bengali to English translation system. Not only the Bengali-English pair, but there are also enormous numbers of different language pairs</s>
|
<s>which thrive for a translation learning mechanism of their own, like Bengali-Arabic, Hindi-Bengali, Arabic-English, Arabic-Spanish, etc. In this study, we take the Bengali to English translation system as an example to propose a generalised skeleton for a multilingual translation system. The main focus of this work is verb identification and optimization techniques using semantic analysis.
II. MOTIVATION
Natural languages like English, Spanish, and even Hindi are rapidly progressing in processing by machines. While progress has been made in language translation software and allied technologies, the primary language of the ubiquitous and all-influential World Wide Web is English. English is typically the language of latest-version applications and programs and new freeware, manuals, shareware, peer-to-peer, social media networks and websites. However, Bengali, being among the top ten languages in the world, lags behind in some crucial areas of research like parts of speech tagging, information retrieval from texts, text categorization, and most importantly, in the area of syntax and semantic checking [1]. Nowadays, Google Translator is one of the pioneer applications supporting a number of languages to translate from one to another. Although it works successfully for many languages, it is still in a developing phase for Bengali to English translation. Google Translator fails to detect the verbs in a sentence accurately. More importantly, it cannot always retrieve necessary information like person and number of the subject, tense of the verb, etc. correctly, which are the pillars of a successful translation. Therefore, the resulting translation becomes faulty for a large set of sentences. Fig. 1 shows some examples of faulty translations by Google Translator for the Bengali-English language pair.
978-1-7281-1325-8/18/$31.00 ©2018 IEEE
Fig. 1. Faulty translations of Google Translate 
The correct translations of these sentences are respectively:
• You will eat rice
• I have eaten rice
• You ate rice
If we notice carefully, the source of these faults is mainly the misleading verbs, since the detection of the tense from them is incorrect. This problem leads to faulty translations for a significant number of input sentences, since the problem relates to the basic skeleton of sentence construction. The corrections have been achieved in our proposed system by semantic analysis of the verbs, which can be visualised in Fig. 16, discussed later in this paper. Other translators, e.g., Bing, Yahoo Babel Fish, Systran Language Translation, SDL Free Translation, etc., do not support many widely used languages like Bengali, Arabic, etc. Therefore, the motive of our research is not only to efficiently translate one language to another using a generalised translation skeleton but also to teach the translation mechanism step by step. In this study, we mainly focus on the detection and the learning of verbs semantically, as semantic-based verb identification and optimization is an attempt towards achieving that goal. We also show a comparative analysis of the results with Google Translator to point out the improvements achieved by our proposed mechanism.
III. RELATED WORK
Bengali, being the native language of about 243 million people [20], still lacks significant research in the area of natural language processing. Bangla to English translation was first proposed by Sk. Borhan Uddin, Dr. Md. Fokhray Hossain and Kamanashis Biswas using the OpenNLP tool. They proposed a simple technique for synthesizing Bengali words. However, they used the OpenNLP tool after translating the Bengali word to the corresponding English word, which caused erroneous Parts of Speech (POS) tagging</s>
|
<s>for different words and generated wrong outputs for very simple sentences. Kim et al. [4] used syntactic chunks as units of translation for improving insertion or deletion of words between two distant languages. However, an example base with aligned chunks in both source and target languages is missing in this approach. Saha et al. [12] reported an EBMT (Example-Based Machine Translation) system for the translation of different news headlines. The work showed that EBMT can be a positive approach for the Bengali language; however, their approach relied mostly on news headlines. Moreover, Gangadharaiah et al. [3] proposed that templates can be useful for EBMT to obtain longer phrasal matches if coordinated with statistical decoders. Their study showed that it is a time-consuming task to cluster the words manually, and that it would be less time-consuming to use standard available resources such as WordNet for clustering. Dasgupta et al. [6] proposed to use syntactic transfer. They converted CNF (Chomsky Normal Form) trees to normal parse trees and, using a bilingual dictionary, generated the output translation. However, this research did not consider translating the unknown words which did not appear in the bilingual dictionary.
IV. PROPOSED MECHANISM
Fig. 2. Translation Methodology
Our proposed mechanism involves storing the gist, or concept, of a sentence in a structure using semantic analysis. A simple sentence can basically be broken down into its subject, verb and object, in any order corresponding to the language of the sentence. Each of them may have their own attributes such as number, person, tense, etc. In addition, the overall sentence can have different modes, e.g., negative form, interrogative form, etc. Fig. 2 shows the sequence of steps for translation. During translation from one language to another, it is possible that the direct translation of a word in the destination language is not available. 
As our targeted translator system will contain a number of languages, we can use a chain of intermediate language translations to reach the destination language. Fig. 3 illustrates the process of this method.

Fig. 3. Translation using intermediate languages

For example, suppose we want to translate ‘word1’ from Bengali to English. When we look up the vocabulary of the translator, we see that ‘word1’ does not exist in the Bengali-English vocabulary. However, the word might be available in the Bengali-Arabic vocabulary of the translator, and the Arabic translation for ‘word1’ can be ‘word1-Arabic’. Now, if ‘word1-Arabic’ is also available in the Arabic-English vocabulary, we find that in English, ‘word1-Arabic’ stands for ‘word1-English’. Hence, for the Bengali word ‘word1’, the appropriate English translation ‘word1-English’ is found by the proposed translator.
The methodology discussed so far may be appropriate for generic word translation only. However, this approach cannot be directly applied to translating verbs, as they may appear in different forms depending on the tense of the sentence, the number and person of the subject, etc. The scenario becomes more complex when some suffixes or prefixes are assimilated into the verbs. This is common in languages such as Bengali or Arabic, where a root form of a verb changes into different modes depending on the subject and the tense of a sentence. In this scenario, a simple look-up table (for vocabulary) is not good enough for verb translation.

V. VERB IDENTIFICATION & TRANSLATION METHODOLOGY

In our proposed translation system, the Bengali verbs are stored in a table along with the other words as a
part of the vocabulary for each language. However, we need to keep in mind that one verb may have multiple representations based on the tense and subject of a sentence, as shown in Fig. 4. The figure shows an example of the different forms taken by each of two different verbs, ‘eat’ and ‘play’, in Bengali.

Fig. 4. Multiple forms of verbs in Bengali

We have implemented three different approaches for translating the verbs efficiently, which give different results on performance and accuracy. Each approach improves over the previous one. After discussing them, we will show a comparative evaluation of these three different approaches.

A. Naive Approach: Gigantic Database

This is the simplest approach (approach 1) to implement. Like all other words (nouns, pronouns, etc.), we simply insert all the different forms of a standard verb with their standard translation as separate entries in the database table for vocabulary. The following figure (Fig. 5) illustrates how multiple entries for a standard verb are incorporated in the database as vocabulary.

Fig. 5. Database table for translating verbs having different forms

Using this table, we can find the standard translated verb (eat, go, play, etc.), which is then modified according to the tense and subject of the sentence by applying semantic analysis (reported in our previous work [1]). Here, we have shown an example of a translated verb, ‘eat’. It is then processed based on the semantic analysis of the sentence, e.g., is eating, ate, has eaten, etc. This approach guarantees 100 percent accuracy in terms of translating verbs. However, memory consumption becomes a major issue due to the repetitive insertion of one standard verb in various forms. We will come back to this point later with some statistical measures.

B. Optimized Database with Semantic Analysis

We propose our next approach (approach 2), which reflects an immediate improvement over the previous one. As discussed earlier, if we need to store the word translation for each form of the same verb, the database becomes very large due to the repetitive insertions, which leads to massive memory consumption. However, we can avoid the multiple insertions of the same verb in different forms using this approach: database optimization with semantic analysis. We store only the standard verb in the vocabulary table and apply semantic analysis to detect the standard form from the other forms of the verb, depending on number, person and tense, as shown in Fig. 6. The figure shows how one word (the standard verb) can take different forms and suggests inserting only that particular standard word in the database table for vocabulary, not all of its different forms. This avoids multiple insertions in the database for the same verb with multiple forms.

Fig. 6. Database optimization using semantic analysis on verbs

However, to detect that standard verb from its other different forms, we needed to concatenate all the different forms of the verb into a single large string and insert it into another table with its standard form as a single entry, as shown in Fig. 7.

Fig. 7. Mapping between non-standard forms and standard form of verb

Although this approach improves the searching time and significantly avoids the overhead of multiple entries, it offers no significant improvement in terms of the overall memory required for the actual data, since all the forms of a standard verb are ultimately saved in
the database as a single string.

C. Levenshtein Distance

Our latest approach (approach 3) emerges from the demand for sustainability, to ensure green computation in terms of both space complexity and time complexity. In this approach, the translation of a verb is done using a hash table, whose key-value pairs consist of only the standard forms of verbs in the two languages. In order to translate effectively, we need a way to recognize the standard form of a verb from its non-standard form. For this purpose, we use a modified version of a popular string similarity measurement algorithm known as Levenshtein distance.
Levenshtein distance measures the minimum number of operations required to convert a source string into a destination string. The commonly used operations are:
• Insertion (of a letter into the source string)
• Deletion (of a letter from the source string)
• Substitution (of a letter with another letter in the source string)
Each of these operations is associated with a cost. Whenever an operation is performed upon the source string, the corresponding cost is taken into account. A higher cost indicates higher dissimilarity between the source and the destination string.

D. Modified Levenshtein Distance

In the standard Levenshtein distance algorithm, each operation has unit cost. We have carefully modified the costs of these operations in an algorithm that identifies the root verb from a non-standard form of a verb. A non-standard form of a verb may have a prefix and a suffix assimilated into it, based on tense and subject. Instead of directly trying to match a non-standard form of a verb with a standard form, we break the non-standard form down into its root word, and then try to match the root word with its standard form. The process can be visualized in Fig. 8.

Fig. 8. Verb translation using modified Levenshtein distance algorithm

In order to convert into the root verb, we need to cut down letters or characters from the non-standard form.
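The weighted distance and root-word selection developed in this section can be sketched in Python as follows. This is an illustrative implementation, not the system's Java code; it assumes the cost scheme described here (deleting a letter from the verb form is free, introducing a significant character costs two, and introducing an insignificant one, such as a vowel, costs zero), with English letters standing in for the Bengali characters:

```python
# Illustrative sketch of the modified Levenshtein distance.
# Assumption: deletion from the verb form is free; a newly introduced
# significant character costs 2; an insignificant one (e.g. a vowel) costs 0.

SIGNIFICANT_COST, INSIGNIFICANT_COST = 2, 0
INSIGNIFICANT_CHARS = set("aeiou")      # stand-in for the insignificant list

def char_cost(c):
    """GetCost: weight of introducing character c."""
    return INSIGNIFICANT_COST if c in INSIGNIFICANT_CHARS else SIGNIFICANT_COST

def modified_distance(form, root):
    """GetDistance: weighted edit distance from a verb form to a root."""
    m, n = len(form), len(root)
    # dist[i][j]: cost of turning form[:i] into root[:j].
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for j in range(1, n + 1):            # inserting root characters
        dist[0][j] = dist[0][j - 1] + char_cost(root[j - 1])
    # dist[i][0] stays 0: deleting form characters is free.
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = 0 if form[i - 1] == root[j - 1] else \
                char_cost(form[i - 1]) + char_cost(root[j - 1])
            dist[i][j] = min(dist[i - 1][j - 1] + match,               # replace
                             dist[i][j - 1] + char_cost(root[j - 1]),  # insert
                             dist[i - 1][j])                           # delete
    return dist[m][n]

def get_root_word(form, roots):
    """GetRootWord: the candidate root at minimum weighted distance."""
    return min(roots, key=lambda r: modified_distance(form, r))

print(get_root_word("eating", ["eat", "play"]))   # → eat
```

With deletion free, any root that is embedded in the verb form scores zero, which is exactly the subset intuition explained below.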
So if we are deleting letters from the source, we are probably getting closer to the root word. This is why we have assigned the deletion cost to zero.
On the other hand, the set of letters in the root verb is almost always a subset of the letters in the non-standard form of that verb. An example is shown in Fig. 9.

Fig. 9. Root-verb as a subset of the non-standard verb forms

Therefore, in the process of modification, if we introduce a new letter using an insertion or substitution operation, we are most likely deviating away from the destination root verb. For this reason, the algorithm is modified so that the introduction of a new letter or character penalizes the overall cost. We have denoted this cost as ‘Significant Cost’.
In order to improve root-verb identification, we have introduced another concept into this algorithm. It is often seen that the root word contains a new vowel that is different from the non-standard form of that verb. To facilitate this process, we have considered the insertion of these vowels insignificant, and denoted the associated cost as ‘Insignificant Cost’. This improves the accuracy of root-word identification for languages such as Bengali or Arabic.

Algorithm 1 Get the root word given another form of the verb
procedure GETROOTWORD
  Input: ModVerbForm ← the modified verb form
  Output: the root word of ModVerbForm
  root_list ← list of
root words
  matched_root ← root_list[0]
  min_dist ← GetDistance(ModVerbForm, root_list[0])
  for root in root_list do
    temp ← GetDistance(ModVerbForm, root)
    if temp < min_dist then
      min_dist ← temp
      matched_root ← root
  return matched_root

Algorithms 1, 2 and 3 together demonstrate the complete modified Levenshtein distance calculation. First, from a given list of root words and a verb form, our system finds the root word that is closest to the given verb form using the GetDistance procedure (Algorithm 1). Then Algorithm 2 calculates the minimum distance to convert the verb form into the root word by inserting, deleting or replacing characters. In the case of insertion or replacement, it is considered whether the newly introduced character is an insignificant character (a trivial character, such as a vowel, that does not change the meaning of the verb significantly) or not. The comparison is done using the GetCost procedure. Finally, GetCost returns InsignificantCost if the input character is insignificant; otherwise, it returns SignificantCost, as shown in Algorithm 3.

Algorithm 2 Measure the weighted distance between a root word and another verb form
procedure GETDISTANCE
  Input: lhs, rhs ← two character sequences
  Output: cost difference between lhs and rhs
  len_lhs ← (length of lhs) + 1
  len_rhs ← (length of rhs) + 1
  cost ← array of size len_lhs
  new_cost ← array of size len_lhs
  for i := 0 to len_lhs − 1 step 1 do
    cost[i] := i
  for j := 1 to len_rhs − 1 step 1 do
    new_cost[0] := j
    for i := 1 to len_lhs − 1 step 1 do
      if lhs[i−1] = rhs[j−1] then match ← 0
      else match ← GetCost(lhs[i−1]) + GetCost(rhs[j−1])
      cost_replace := cost[i−1] + match
      cost_insert := cost[i] + GetCost(rhs[j−1])
      cost_delete := new_cost[i−1]
      new_cost[i] := Min(cost_replace, cost_insert, cost_delete)
    Swap(cost, new_cost)  // swap the two arrays after each inner loop
  return cost[len_lhs − 1]

Algorithm 3 Get the cost of a character
procedure GETCOST
  Input: c ← character whose cost is to be calculated
  Output: cost ← cost of the character
  SignificantCost ← significant change weight
  InsignificantCost ← insignificant change weight
  insignificant_change_list ← list of insignificant characters
  if c is in insignificant_change_list then return InsignificantCost
  return SignificantCost

E. Finding the Standard Form of a Verb

Our proposed translation system contains tables of root verbs pointing towards their respective standard forms of the verb. Using the algorithm demonstrated in the previous section, we are able to match a non-standard form of a verb with the nearest matching root verb. The root verbs are then mapped to standard verb forms using a hash map. In our system, it is also possible that a single verb form may come from multiple root verbs. The complete process can be visualized in Fig. 10.

Fig. 10. Finding the standard form of verb

VI. EXPERIMENTAL EVALUATION

The parameters and settings that were used to carry out the experiments are discussed in the following subsections.

A. Tools and Settings

The proposed algorithm was implemented in the Java language and was used to translate verbs from Bengali to English. Several forms of verbs were tested using the program. For experimentation, we used the following features in our implemented system:
• Language: Java
• Platform/IDE: NetBeans
• Database: SQLite
• Tool: OpenNLP tools
A major issue arose while taking input and parsing Bengali texts in Java. We set the text encoding to UTF-8 and also changed
the font settings and some other settings to work successfully with Bengali texts in NetBeans.
We used SQLite with Java for the database in our system. Since we had to create a Bengali to English dictionary, we needed a database to retrieve the word translations. Therefore, we installed SQLite and also added a jar file for SQLite to our project.
To calculate the Levenshtein distance, the values of significant cost and insignificant cost were assigned two and zero, respectively.

B. Results

All the approaches gradually improve one over another. In particular, the Levenshtein distance calculation approach shows significant improvement in terms of both space and time complexity. This algorithm was applied to several Bengali verbs to get the root words (verbs). The root words were then mapped to the standard forms. The output generated by implementing this algorithm is summarized in Fig. 11.

Fig. 11. Output of modified Levenshtein distance algorithm

First, we obtain the root verb by calculating the Levenshtein distance accordingly. In the meantime, we can also identify the tense by extracting the suffixes from the verbs. Then, after mapping the root verb(s) to the standard form, we retrieve the raw translation of the verb. However, the detection of the root verb by calculating the Levenshtein distance can be incorrect for some forms of verbs, which can lead to a completely wrong translation of the verb. In Fig. 11, we can notice one such faulty case where the verb has been erroneously translated to ‘eat’ in place of ‘play’. Fortunately, such erroneous cases have been handled successfully by slight preprocessing of the verbs, discussed in the next section. Nevertheless, we finally translate the verb by modifying its raw translated form after gathering other relevant information (POS tagging, person, number, etc.) from the input sentence, as shown in Fig. 12.

Fig. 12. Root-verb identification and translation

We then generate the translation of the input sentence by applying the necessary grammatical rules of the target language. The details of the complete translation mechanism have been discussed in our previous work [1].

C. Findings

From the experimental results of approach 3, we can see that our algorithm is able to detect almost all of the root words successfully, with some exceptions (Fig. 11). Afterwards, we found that accuracy improves if the Levenshtein distance is calculated after carefully preprocessing the non-standard verbs a little. After removing the common suffixes which attach to the verbs due to different tenses, we get an optimized verb closer to the root verb. This speeds up the Levenshtein distance calculation and also offers better accuracy in detecting the correct root word.
Fig. 15 shows the improvement achieved (inside the green box) due to the slight preprocessing of the verbs before the Levenshtein distance calculation. It eliminates the incorrect detection of the root verb shown earlier (in Fig. 14) and ensures correct root word detection for almost all possible cases. The figure illustrates how the addition of the ‘Suffix Reduced Form’ improves the accuracy of the modified Levenshtein distance algorithm.

Fig. 13. Comparison of the proposed approaches in terms of memory consumption
Fig. 14. Comparison of the proposed approaches in terms of memory consumption (larger number of verbs)

We found that both approach 1 and approach 2 generate almost 100 percent accurate results in translating different forms of verbs. The modified Levenshtein distance calculation approach generates around 90 percent accurate results, which has subsequently been increased to almost
98-99 percent in the improved version of the algorithm (preprocessing of verbs before distance calculation).
However, the most striking statistical comparison among the three approaches can be shown in terms of space complexity. Fig. 13 and Fig. 14 graphically show a comparative evaluation of these approaches for different numbers of verbs. In Fig. 14, we can see that approach 3 improves over approach 2 by saving around 130 kB of memory. However, we have shown the result for only 1000 verbs from a single language. Our proposed translator should deal with hundreds of languages containing millions of verbs each. Considering this, the improvement achieved in terms of memory consumption can be around 130 kB * 100 * 1000 = 13 GB, keeping in mind that the vocabulary also contains words other than verbs; this is especially significant for mobile devices.

Fig. 15. Improvement over modified Levenshtein distance algorithm due to preprocessing of the verbs

Now, we would like to show how our proposed translation system improves over the most widely used translator, Google Translate, in Fig. 16. We carefully designed a dataset for Bangla-English translation so that more focus falls on the verbs in the input sentences. The figure illustrates, with some examples from a large dataset, how our proposed system achieves improvement over Google Translate by identifying the root verbs efficiently. It shows that Google Translate fails to identify the correct root verb, including the tense of the sentence (red coloured words), which leads to incorrect translations for even simple sentences. However, the accurate translations of those sentences have been generated by our system.

VII. FUTURE WORK

One of the main challenges in Bengali to English text conversion remains in implementing its vast grammatical rules. If we can track the core rules to acquire a generalized format for all rules and exceptions, then the translation task will be simpler and more compact. Developing OpenNLP tools for efficient parts of speech tagging of Bengali words in a sentence is one of the most crucial tasks in Bengali to English translation. We aim to extend our work by developing OpenNLP tools for the Bengali language.

Fig. 16. Improvement of our proposed translator over Google Translate

There is a great deal of research opportunity in language processing. Grammars keep changing as a language evolves. Therefore, we need a translation process that can incorporate new sentence-making rules at any time. Machine learning using statistical machine translation can be one way to achieve this, and we plan to experiment with it.
Besides, our initial motive was to build a translation model for Bengali to Arabic conversion. However, due to a lack of proficiency in the Arabic language, we had to start with conversion to English. Therefore, we want to implement our proposed generalised translation skeleton for the Arabic language soon, so that we can help a large group of people to learn and understand Arabic.

VIII. CONCLUSION

In the era of technology and global communication, people generally thrive not only on translations between two languages but also on learning multiple languages equally effectively. However, even the very general or primary grammatical rules of any language usually contain a good number of exceptions. Keeping track of the wide variety of possible cases is one of the most common requirements of a multilingual translation system, which is a difficult task even for the most intelligent beings. Therefore, it
massively demands a wide and complex application of Artificial Intelligence to build a near-accurate translator, which, on the contrary, may drastically degrade the overall performance of the system.
Our system currently focuses on Bengali to English translation. However, it has a limited knowledge base and vocabulary so far. By increasing the vocabulary and the knowledge base, we can improve its efficiency by testing over a wide range of different cases for general purpose use. Considering the limitations of a machine translator, our preference is always towards making the learning of a language easier by implementing and teaching all the basic and necessary translation processes step by step.

REFERENCES
[1] M. Islam and A. Islam, Polygot: Going Beyond Database Driven and Syntax-based Translation, ACM DEV '16: Proceedings of the 7th Annual Symposium on Computing for Development, November 2016.
[2] Z. Anwar, Developing a Bangla to English Machine Translation System Using Parts of Speech Tagging: A Review, Journal of Modern Science and Technology, Vol. 1, No. 1, May 2013.
[3] R. Gangadharaiah, R. D. Brown, and J. G. Carbonell, Phrasal equivalence classes for generalized corpus-based machine translation, in A. Gelbukh (ed.), Computational Linguistics and Intelligent Text Processing, Lecture Notes in Computer Science, vol. 6609, pp. 13-28, Springer Berlin / Heidelberg, 2011.
[4] S. Raphael, J. D. Kim, R. D. Brown, and J. G. Carbonell, Chunk-Based EBMT, EAMT, 2010.
[5] M. Roy, A Semi-supervised Approach to Bengali-English Phrase-Based Statistical Machine Translation, Proceedings of the 22nd Canadian Conference on Artificial Intelligence, 2009.
[6] S. Dasgupta, A. Wasif, and S. Azam, An Optimal Way Towards Machine Translation from English to Bengali, Proceedings of the 7th International Conference on Computer and Information Technology (ICCIT), 2004.
[7] M. Anwar and M. Bhuiyan, Syntax Analysis and Machine Translation of Bangla Sentences, International Journal of Computer Science and Network Security, 09(08), 317-326, 2009.
[8] Sk. B. Uddin, Bangla to English Text Conversion using opennlp Tools, Daffodil International University Journal of Science & Technology, Vol. 8, Issue 1, January 2013.
[9] M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul, A Study of Translation Edit Rate with Targeted Human Annotation, Proceedings of the Association for Machine Translation in the Americas, 2006.
[10] S. K. Naskar and S. Bandyopadhyay, A Phrasal EBMT System for Translating English to Bengali, Proceedings of the Workshop on Language, Artificial Intelligence, and Computer Science for Natural Language Processing Applications (LAICS-NLP), 2006.
[11] D. Saha, S. K. Naskar, and S. Bandyopadhyay, A Semantics-based English-Bengali EBMT System for Translating News Headlines, MT Summit, 2005.
[12] G. Doddington, Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics, Proceedings of the Second International Conference on Human Language Technology Research, 2002.
[13] N. Karamat, Verb Transfer for English to Urdu Machine Translation, FAST-Lahore, 2006.
[14] N. Chatterjee, S. Goyal, and A. Naithani, Resolving Pattern Ambiguity for English to Hindi Machine Translation Using WordNet, Workshop on Modern Approaches in Translation Technologies, Borovets, Bulgaria, 2005.
[15] K. Md. Anwarus Salam, Example Based English to Bengali Machine Translation, thesis, August 2009.
[16] J. Tiedemann and L. Nygard, The OPUS corpus - parallel and free, Proceedings of LREC, 2004.
[17] D. Melamed, A Geometric Approach to Mapping Bitext Correspondence, Proceedings of the First Conference on Empirical Methods in Natural Language Processing (EMNLP), 1996.
[18] https://www.ethnologue.com/guides/how-many-languages
[19] https://en.wikipedia.org/wiki/World language
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/329758886

Document Concept Hierarchy Generation by Extracting Semantic Tree Using Knowledge Graph
Conference Paper, December 2018. DOI: 10.1109/WIECON-ECE.2018.8783083
Sanjida Nasreen Tumpa and Muhammad Masroor Ali
Department of CSE, Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh
tumpa.sanjida@gmail.com, mmasroorali@cse.buet.ac.bd

Abstract—Semantic Web, as an extension of the traditional web, is concerned with the vast amount of unstructured data, and with its motive to make the entire knowledge content machine readable, as well as machine interpretable, all the processes of structuring the data are highly significant. Knowledge representation in trees has been a familiar mechanism for some time. However, such representations are lacking when it comes to document content. In this paper, we present a general mechanism that can generate a representation of the concepts of any document in the form of knowledge trees. We further gather knowledge from knowledge graphs and analyze these data by mapping them with an existing ontology. Finally, we explain how this can be used to create hierarchical concept recommendations to make document search efficient.

Index Terms—Concept hierarchy, Content tree, Information retrieval, Semantic web, Text mining

I. Introduction

In this golden era of information technology, the Semantic Web [1]–[3] plays a vital role by establishing well defined expressions to extract meaningful patterns and information. Most of the information on the traditional web is unstructured and not suitable for machines to retrieve intelligently. The Semantic Web focuses on structuring this immeasurable amount of information to make it machine readable.
As a result, analysis oftonnes of documents which seemed nearly impossible previ-ously can now be interpreted by machines.To extract knowledge from documents, the concept of textmining was introduced. Text mining [4] is the science of re-trieving high quality information from unstructured documentsand transform them into structured format for further analysis.Linking and retrieving structured information are also requiredto establish the concepts of SemanticWeb. Thus, the concepts ofknowledge graph and ontology possess their own significancein the field data linking. If the contents of documents can berepresented in a structured way, it will be possible to link coretopics with existing knowledge sources. Also, this will helpto integrate those into the existing taxonomy to categorize thedocument in order to make the searching process efficient.This paper proposes a novel approach to generate documentcontent tree that represents the texts in the document in a waythat the semantic relations are preserved. The approach alsofinds out the topics or concepts from the documents provided.This paper further intends to determine the hierarchical structureof core concepts by integrating the knowledge graphs with thecontent tree.The rest of this paper is organized as follows. Section IIprovides the overview of the relevant research works. In Sec-tion III, our motivation is</s>
presented, which led us to work in this area. Section IV and Section V give the preliminary knowledge, followed by the conceptual design of our approach. Section VI demonstrates the experimental results of our system. Finally, Section VII concludes the paper.

II. Literature Review

Many researchers have worked on document content representation for text mining or information retrieval [5]–[11]. Some of them considered the semantics of documents; some did not pay any heed to it.
The classical method of information retrieval, the Boolean model, focused only on the presence of a word in the document without considering semantic relations [5]. A popular method for object categorization, the bag-of-words model, disregards the semantic relation and order of words; however, it considers multiplicity [6]. Another notable one, the Vector Space Model, reduced the limitation of binary weights by representing each document in a vector space. However, the semantics of words was also not preserved [7]. Some researchers worked on extended versions of the above models to incorporate semantic relations along with term representation [8], [9], [11].
To incorporate text structure and context into document content representation, researchers have proposed some graphical representation based approaches. In [12], a graph based text representation method was designed under a word semantic space to obtain parts of speech, order, frequency, co-occurrence and context of words in the document. Apart from this, a graph based text mining technique, GDClust, has been proposed in [13], based on the co-occurrence of frequent senses, to present text documents as hierarchical document graphs. In [14], a semantics based graph structure was proposed to hold more structural information and mutual semantic relationships among words. In [15], a term graph model was proposed to represent the content and relationships among words. In [16], a conceptual graph representation of text was proposed using existing linguistic resources, VerbNet and WordNet.
In [17], a method for automatically extracting document fragments from structured online documents with an internal hierarchical structure, like HTML, XML, SGML, etc., was discussed. In [18], a sentence tree structure for document summarization was used. The authors did not represent document content as a rooted tree.
Besides enlightening the representation of document contents, we focused on the hierarchical representation of concepts. In [19], a hierarchical organization of concepts from a set of documents was proposed. The authors preferred to use subsumption to create the hierarchy of selected terms. In [20], an approach for concept hierarchy using formal concept analysis was proposed. This approach is based on the distributional hypothesis, and the hierarchy between terms has been decided considering syntactic dependencies.
There are some other works related to text representation and concept hierarchy besides the mentioned research. However, no researcher has focused on collecting more knowledge regarding the core concepts of a document from knowledge bases, both to cluster the concepts and to obtain the document hierarchy. Furthermore, from the above discussion, it can be said that the concept of a document content tree is a barely touched topic. Trees are used in taxonomic representations of concepts, but not, according to semantic relation and context, in the field of document content representation. This paper intends to contribute to such gaps in information retrieval and text mining.

III. Motivation

A content tree itself is quite significant for information retrieval and text mining. When considering any document containing an ample amount of text,
it may be possible for humans to understand the concepts, but it will not be possible for machines to understand them. This particular requirement is getting universal attention as the present world focuses on parsing and analyzing data in the minimum unit of time.

Furthermore, establishing connections or links between knowledge from different sources is very significant in the Semantic Web. If the knowledge can be represented in well-defined data structures, say a tree, for example, establishing such connections will become much easier. Using a tree to represent the concepts will contribute even more, as it can represent data in hierarchical form; thus, machines will be able to discard unnecessary data when working on a specific operation, improving the search time and lowering the operational complexity. Positive effects will be viewed in various aspects, mainly in the case of knowledge clustering. Considering all these pros, we were highly motivated to work on knowledge representation using a content tree.

IV. Document Content Tree and Concept Hierarchy

A. Document Content Tree

A document content tree can be defined as a tree based representation of any document that demonstrates the dependencies among words in that document. A content tree is basically an acyclic directed graph, G = {N, E}, where N and E are the sets of nodes and edges respectively. In the content tree model proposed in this paper, every document is converted into a rooted tree based on the concepts present in it. There can be multiple trees, or a forest, if the document discusses multiple topics. A content tree has three types of elements: root, nodes and edges. The general structure of a document content tree is as follows:
• Root: A tree contains the information regarding the root node only. The root has been chosen from the main entities of the sentences in the provided document.
• Nodes: Nodes denote the concepts of the document.
Nodes can consist of single or multiple words.
• Edges: A directed edge between two nodes resembles the relation between the nodes. Usually, verbs of the sentences are chosen as the edges, as verbs represent the relations in the sentences.

B. Concept Hierarchy

In Natural Language Processing, concepts can be expressed as the senses of a document. A single concept may consist of a single word or multiple words. A concept hierarchy establishes the hierarchical structure in this scenario. Concept hierarchy, or taxonomy, is a mechanism to demonstrate the generalized hierarchical relationships among concepts. It ensures efficient categorization of a document.

V. Conceptual Overview of the System

The overall system can be dichotomized into two major parts:
1) Document content tree generation,
2) Concept hierarchy extraction.

A. Document Content Tree Generation

The generation of the content tree varies with the language of the document. The tree generation module comprises the following steps:
1) Tokenization and Preparing Tagged Document: The document is segmented based on some pre-defined separators. At this stage, we need to preserve some information together, like names, dates etc. For instance, if we use white space as the separator, the name "সাকিব আল-হাসান" ([sakib al-Hasan], Sakib Al-Hasan) will be tokenized into two tokens, "সাকিব" ([sakib], Sakib) and "আল-হাসান" ([al-Hasan], Al-Hasan), which is not anticipated. For this reason, external knowledge is incorporated to get the desired tokens. We also extract information related to a token. Some examples are the sentence position of that particular token
in the provided document, the token position in a sentence and the overall token position in the document.
2) Coreference Resolution: A combination of heuristic and supervised methods is used for coreference resolution [21]. It helps to find all expressions that refer to the same entity. Therefore, our method recursively tracks the possible antecedent and the pronoun is replaced by the referred entity.
3) Labeling and Filtration of Extracted Information: An extensive dictionary is used to determine the parts of speech along with some additional knowledge regarding the extracted tokens. For example, the part of speech of "বাংলাদেশ" ([baNladeS], Bangladesh) is Noun, but, more specifically, it is the name of a country. Furthermore, not all tokens hold significance for the content tree. Thus, we use Bengali dictionaries for the less significant words, in order to filter the tokens.
4) Tree Construction: We construct the set of nodes and edges after filtration. Every edge in E is considered as a directed edge between two vertices Ni and Nj. The input of the tree algorithm is the extracted information and the output is the representation of the content tree.
5) Tree Optimization: We merge some nodes to optimize the content tree, which results in a reduction of traversal time. Therefore, we propose some semantic rules to unify adjectives with nouns, adjectives with adjectives etc., and we consider the following notations to establish the theoretical terms for the rules: CurrNode is the present node, PrevNode is the immediate preceding node of the present one in the tree, and ResultNode is the output node after applying a rule. E1 is the edge between the previous node of PrevNode and PrevNode itself. E2 is the edge between PrevNode and CurrNode.
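As a concrete illustration of this notation, the merge operation of the optimization step can be sketched as follows. This is only a sketch under our own assumptions (the paper gives no code); the class and function names are hypothetical, and the part-of-speech checks mirror the noun and adjective unification rules given below.

```python
# Sketch of content-tree node merging. All names are illustrative
# assumptions, not the authors' actual implementation.

class Node:
    def __init__(self, text, pos):
        self.text = text   # word(s) held by the node
        self.pos = pos     # part-of-speech tag, e.g. "Noun", "Adjective"

def merge(prev_node, curr_node):
    """MERGE(PrevNode, CurrNode): unify two adjacent nodes into ResultNode.

    The merged node keeps both words; for the noun rules sketched here,
    POS_OF(ResultNode) = POS_OF(CurrNode)."""
    return Node(prev_node.text + " " + curr_node.text, curr_node.pos)

def optimize(nodes):
    """Apply the noun/adjective unification rules over a node sequence."""
    result = []
    for curr in nodes:
        if result:
            prev = result[-1]
            # Noun after Noun, or Noun after Adjective: merge, keep curr's POS
            if curr.pos == "Noun" and prev.pos in ("Noun", "Adjective"):
                result[-1] = merge(prev, curr)
                continue
            # Adjective after Adjective: merge, result stays an Adjective
            if curr.pos == "Adjective" and prev.pos == "Adjective":
                result[-1] = merge(prev, curr)
                continue
        result.append(curr)
    return result

nodes = [Node("Bangladeshi", "Adjective"), Node("cricketer", "Noun")]
merged = optimize(nodes)
print(len(merged), merged[0].text, merged[0].pos)
# prints: 1 Bangladeshi cricketer Noun
```

Running `optimize` on the node sequence for "Bangladeshi cricketer" collapses the adjective and the noun into a single node whose part of speech follows the current node, which is the behavior the rules prescribe and which shortens later tree traversals.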
The semantic rules are as follows:
• Rule #1:
IF CurrNode → Noun ∧ PrevNode → Noun THEN
ResultNode = MERGE(PrevNode, CurrNode),
ResultEdge = MERGE(E1, E2),
POS_OF(ResultNode) = POS_OF(CurrNode)
• Rule #2:
IF CurrNode → Noun ∧ PrevNode → Adjective THEN
ResultNode = MERGE(PrevNode, CurrNode),
ResultEdge = MERGE(E1, E2),
POS_OF(ResultNode) = POS_OF(CurrNode)
• Rule #3:
IF CurrNode → Month ∧ PrevNode → Number THEN
ResultNode = MERGE(PrevNode, CurrNode),
ResultEdge = MERGE(E1, E2),
POS_OF(ResultNode) = Date
• Rule #4:
IF CurrNode → Adjective ∧ PrevNode → Adjective THEN
ResultNode = MERGE(PrevNode, CurrNode),
ResultEdge = MERGE(E1, E2),
POS_OF(ResultNode) = Adjective

B. Document Concept Hierarchy Using Knowledge Bases

A concept hierarchy increases the efficiency of document retrieval on a large scale. To implement the concept hierarchy, we follow these steps:
1) Concept Extraction from the Provided Document: The above mentioned document content tree helps us to extract the major concepts of a document. Here, we take the roots of the trees as the set of core concepts.
2) Knowledge Extraction Using Knowledge Bases: After extracting concepts from the document, existing knowledge bases like the Google Knowledge Graph [22], DBpedia [23], YAGO [24], WordNet [25] etc. are used to gather more information. Knowledge graphs basically provide knowledge as linked data. The yielded information then needs to be embedded to become more meaningful.
3) Word Embedding and Similarity Checking: Word embedding denotes a class of approaches for representing words in a continuous vector space where semantically similar words are mapped to nearby points [26]. Two popular methods of word embedding from text are Word2Vec [27] and GloVe [28]. We incorporated the Word2Vec method to embed the information in the vector space to determine the similarity among
the information extracted from the knowledge bases.
4) Concept Clustering and Hierarchy Generation: The extracted information is clustered based on the similarity weight. There are pre-defined cluster tags for all clusters, which are obtained from the DBpedia Ontology. The taxonomic representation of the extracted information is generated following the class hierarchy of this particular ontology.

VI. Experimental Result

We implemented the system using the Java programming language. Though we have conducted our experiment on Bengali documents, the proposed approach will work similarly for English documents. We have used some modified versions of Wikipedia pages, eliminating the complex sentences, as input. After pre-processing the document, we used an SQLite database to keep the tagged tokens along with all information. A sample tagged sentence looks like: "সাকিব <Noun, Person, Male> আল-হাসান <Noun, Person> একজন <Noun, Number> বাংলাদেশী <Adjective, Nationality> ক্রিকেটার <Noun, Profession>" ([sakib al-Hasan ekjon baNladeSi kriketar], Sakib Al-Hasan is a Bangladeshi cricketer). After that, we constructed the tree following the proposed algorithm. Then, we extracted information from the existing knowledge bases using the roots of the content tree. As of now, we have used the Google Knowledge Graph and DBpedia for this purpose. Then, we have used the DBpedia ontology to cluster the data in order to create the concept hierarchy. Figure 1 demonstrates the resultant content tree along with the concept hierarchy. The accuracy of our system for simple sentences is quite satisfactory: it could identify 97–98 nodes among 100 nodes.

VII. Conclusion

Document hierarchy based on a content tree will contribute predominantly to the taxonomy of documents. The tree representation will also help us to merge documents with the same concepts to increase the connectivity of knowledge efficiently.
In addition, this will open up a vast field of research to represent documents in a more structural manner, making them more search efficient and ultimately achieving the vision of the Semantic Web.

[Figure 1 appears here: two content trees with Bengali node and edge labels and English glosses, rooted at "Sakib Al-Hasan" and "Michael Joseph Jackson".]
Figure 1. Document content tree generated from the documents on "Sakib Al-Hasan" and "Michael Joseph Jackson".

References
[1] T. Berners-Lee, J. Hendler, and O. Lassila, "The semantic web," Scientific American, vol. 284, no. 5, pp. 34–43, 2001.
[2] P. Hitzler, M. Krötzsch, and S. Rudolph, Foundations of Semantic Web Technologies. CRC Press, 2009.
[3] L. Yu, A Developer's Guide to the Semantic Web. Springer Science & Business Media, 2011.
[4] A.-H. Tan et al., "Text mining: The state of the art and the challenges," in Proceedings of the
PAKDD 1999 Workshop on Knowledge Discovery from Advanced Databases, vol. 8, 1999, pp. 65–70.
[5] A. H. Lashkari, F. Mahdavi, and V. Ghomi, "A Boolean model in information retrieval for search engines," in Information Management and Engineering, 2009. ICIME '09. International Conference on. IEEE, 2009, pp. 385–389.
[6] Y. Zhang, R. Jin, and Z.-H. Zhou, "Understanding bag-of-words model: a statistical framework," International Journal of Machine Learning and Cybernetics, vol. 1, no. 1–4, pp. 43–52, 2010.
[7] G. Salton, A. Wong, and C.-S. Yang, "A vector space model for automatic indexing," Communications of the ACM, vol. 18, no. 11, pp. 613–620, 1975.
[8] P. Wiemer-Hastings, K. Wiemer-Hastings, and A. Graesser, "Latent semantic analysis," in Proceedings of the 16th International Joint Conference on Artificial Intelligence. Citeseer, 2004, pp. 1–14.
[9] G. Salton, E. A. Fox, and H. Wu, "Extended Boolean information retrieval," Communications of the ACM, vol. 26, no. 11, pp. 1022–1036, 1983.
[10] W. Waller and D. H. Kraft, "A mathematical model of a weighted Boolean retrieval system," Information Processing & Management, vol. 15, no. 5, pp. 235–245, 1979.
[11] E. A. Fox, "Extending the Boolean and vector space models of information retrieval with p-norm queries and multiple concept types," Ph.D. dissertation, Cornell University, Ithaca, NY, USA, 1983.
[12] F. Zhou, F. Zhang, and B. Yang, "Graph-based text representation model and its realization," in Natural Language Processing and Knowledge Engineering (NLP-KE), 2010 International Conference on. IEEE, 2010, pp. 1–8.
[13] M. S. Hossain and R. A. Angryk, "GDClust: A graph-based document clustering technique," in Data Mining Workshops, 2007. ICDM Workshops 2007. Seventh IEEE International Conference on. IEEE, 2007, pp. 417–422.
[14] J. Wu, Z. Xuan, and D. Pan, "Enhancing text representation for classification tasks with semantic graph structures," International Journal of Innovative Computing, Information and Control (ICIC), vol.
7, no. 5, 2011.
[15] W. Wang, D. B. Do, and X. Lin, "Term graph model for text classification," in International Conference on Advanced Data Mining and Applications. Springer, 2005, pp. 19–30.
[16] S. Hensman, "Construction of conceptual graph representation of texts," in Proceedings of the Student Research Workshop at HLT-NAACL 2004. Association for Computational Linguistics, 2004, pp. 49–54.
[17] V. Maslov, "Method for extracting digests, reformatting, and automatic monitoring of structured online documents based on visual programming of document tree navigation and transformation," Mar. 25, 2003, US Patent 6,538,673.
[18] Y. Kikuchi, T. Hirao, H. Takamura, M. Okumura, and M. Nagata, "Single document summarization based on nested tree structure," in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), vol. 2, 2014, pp. 315–320.
[19] M. Sanderson and B. Croft, "Deriving concept hierarchies from text," in Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 1999, pp. 206–213.
[20] P. Cimiano, A. Hotho, and S. Staab, "Clustering concept hierarchies from text," in Proceedings of the Conference on Lexical Resources and Evaluation (LREC), 2004.
[21] W. M. Soon, H. T. Ng, and D. C. Y. Lim, "A machine learning approach to coreference resolution of noun phrases," Computational Linguistics, vol. 27, no. 4, pp. 521–544, 2001.
[22] "Google Knowledge Graph API," [Online]. Available: https://developers.google.com/knowledge-graph/ [accessed 27-July-2018].
[23] S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. Ives, "DBpedia: A nucleus for a web of open data," in The Semantic Web. Springer, 2007, pp. 722–735.
[24] F.
M. Suchanek, G. Kasneci, and G. Weikum, "YAGO: A large ontology from Wikipedia and WordNet," Web Semantics: Science, Services and Agents on the World Wide Web, vol. 6, no. 3, pp. 203–217, 2008.
[25] G. A. Miller, "WordNet: a lexical database for English," Communications of the ACM, vol. 38, no. 11, pp. 39–41, 1995.
[26] "Vector representations of words," 2018, [Online]. Available: https://www.tensorflow.org/tutorials/representation/word2vec [accessed 3-August-2018].
[27] Y. Goldberg and O. Levy, "word2vec explained: deriving Mikolov et al.'s negative-sampling word-embedding method," arXiv preprint arXiv:1402.3722, 2014.
[28] J. Pennington, R. Socher, and C. Manning, "GloVe: Global vectors for word representation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 1532–1543.
M.Sc. Engg. (CSE) Thesis

Developing a Concept-Level Polarity Detection Model through Generation of a Rule Based Semantic Parser for Bengali Sentences

Submitted by
Md Fazle Rabbi
1015052096

Supervised by
Dr. Muhammad Masroor Ali

Submitted to
Department of Computer Science and Engineering
Bangladesh University of Engineering and Technology
Dhaka, Bangladesh
in partial fulfillment of the requirements for the degree of
Master of Science in Computer Science and Engineering

March 2019

Dedicated to my parents

Acknowledgement

I would first convey my heartfelt thanks to my thesis supervisor, Professor Dr. Muhammad Masroor Ali, for being so generous in guiding me whenever I ran into a trouble spot or had a question about my research or writing. He steered me in the right direction whenever I needed it. His patient guidance helped me throughout the writing of this thesis. I could not have imagined having a better supervisor and mentor for my M.Sc. study and onward.

Besides my supervisor, I would like to thank the rest of my thesis committee: Professor Dr. Md. Mostofa Akbar, Professor Dr. M. Kaykobad, Associate Professor Dr. Md. Rifat Shahriyar, and Professor Dr. Md. Mahbubur Rahman, not only for their insightful comments and encouragement, but also for the hard questions which incented me to widen my thesis from various perspectives.

I will always be grateful to the Bangladesh Army for giving me the opportunity to attend my M.Sc. study. I am also thankful to the MIST authority for sparing me as and when required. I also acknowledge the cooperation of my colleagues, who were always at my side.

Finally, I must express my profound gratitude to my spouse for providing me with unfailing support and continuous encouragement throughout my years of study and through the process of researching and writing this thesis.
Last but not the least, I would like to thank my parents and family for inspiring me spiritually throughout my life in general.

Dhaka
March 11, 2019

Md Fazle Rabbi
1015052096

Contents

Candidate's Declaration
Board of Examiners
Acknowledgement
List of Figures
List of Tables
Abstract

1 Introduction
1.1 Background
1.2 Problem Definition
1.3 Research Aim and Objective
1.4 Overview of the Work
1.5 Thesis Contribution and Final Outcome
1.6 Thesis Outline

2 Related Works
2.1 Sentiment Analysis
2.1.1 Rule Based Approaches
2.1.2 Machine Learning Approaches
2.1.3 Concept Based Approaches
2.2 Sentence Parsing and Concept Extraction
2.3 Sentiment Analysis in Bengali
2.4 Scope of the Work
2.5 Research Questions

3 Preliminaries
3.1 NLP Fundamentals
3.1.1 Lexicon
3.1.2 Concept
3.1.3 Corpus
3.1.4 Semantic Dependency
3.1.5 Opinion
3.1.6 Sentiment
3.2 NLP Techniques
3.2.1 Tokenization
3.2.2 POS Tagging
3.2.3 Parsing
3.3 NLP Resources
3.3.1 Concept Net
3.3.2 WordNet-Affect
3.3.3 AffectiveSpace
3.4 Mathematical Models
3.4.1 Singular Value Decomposition (SVD)
3.4.2 Linear Discriminant Analysis (LDA)

4 Proposed Methodology
4.1 Overview of the Methodology
4.2 Data Acquisition
4.3 Parse Tree Generation
4.4 Concept Extraction and Dependency Detection
4.5 Polarity Detection
4.5.1 Construction of the Polarity Detection Model
4.5.2 Determination of the Polarity of the Sentence
5 Experimental Analysis
5.1 Experimental Setup
5.2 Experimental Data Set
5.3 Evaluation Method
5.4 Result Analysis
5.4.1 Performance Analysis on Concept Extraction
5.4.2 Evaluation on Classification of Training Data
5.4.3 Analysis of the Polarity Detection Model
5.5 Discussion

6 Conclusion
6.1 Contribution of the Work
6.2 Challenges
6.3 Scope of Future Work

References

List of Figures

1.1 An overview of the proposed work.
2.1 Rule based classification scheme integrated with different NLP resources (collected from [1]).
2.2 Fragment of a ConceptNet (collected from [2]).
2.3 A sketch of AffectiveSpace [3]. Affectively positive concepts (in the bottom-left corner) and affectively negative concepts (in the up-right corner) are floating in the multi-dimensional vector space.
2.4 The 3D model
and the net of the Hourglass of Emotions [4]. Since affective states go from strongly positive to null to strongly negative, the model assumes an hourglass shape.
2.5 Parse Tree generated by Stanford Parser [5].
3.1 Generation of a Parse Tree using POS information.
3.2 Stanford Dependency Tree [6].
3.3 A-Labels and corresponding example synsets (collected from [7]).
3.4 A Sketch of the AffectiveSpace (collected from [3]).
3.5 Boundary line to separate 2 classes using LDA.
3.6 Uncorrelated (left) and correlated (right) normal distribution.
4.1 Workflow diagram of the proposed methodology.
4.2 Rule based parse tree of a complex sentence generated from annotated data.
4.3 Concepts extracted from parse tree and their association within the sentence.
4.4 Overview of the concept based polarity detection model.
4.5 Sentence level polarity detection through tree traversal.
5.1 Positional overview of the data points based on the canonical discriminant function.
5.2 Summary result of classification of training data with three polarity classes.
5.3 Summary result of classification of training data with two polarity classes.
5.4 Fraction of classification function coefficient matrix.

List of Tables

4.1 Tag list used in annotated data with examples.
4.2 Various forms of auxiliary verb depending on the position within the sentence.
4.3 Concepts with their eigenvalues and class labels to train the model.
4.4 Classification function coefficient matrix.
5.1 Statistics of the training data set.
5.2 Statistics of the corpus to evaluate the model.
5.3 Confusion Matrix for precision and recall.
5.4 Performance of the concept extractor.
5.5 Confusion Matrix for the concept with positive polarity.
5.6 Confusion Matrix for the concept with negative polarity.
5.7 Confusion Matrix for the concept with neutral polarity.
5.8 Values of the metrics for the polarity classes of concept.
5.9 Performance evaluation for polarity detection at sentence level.

Abstract

Public opinion over the Internet is gaining importance with the rapid growth of online content every day. The sentiment of public opinion is considered a valuable piece of information in every interaction of human life. Concept-based approaches are the recent evolution in sentiment analysis, intended to infer the semantic and affective information associated with natural language opinion. Sentiment analysis at the concept level introduces a new opportunity for information retrieval tasks like polarity detection, especially for a less privileged language like Bengali. In this work, a rule-based semantic parser is developed to generate the parse tree for a Bengali sentence. Concepts are extracted from the parse tree by exploring the dependencies among the constituents of the sentence. A domain specific classification model is proposed to detect the polarity of the concepts, which in turn are used to find the sentence polarity through parse tree traversal. Here, the AffectiveSpace is used as a knowledge base. Training data on the targeted domain is generated from online contents using term frequency and inverse document frequency (tf-idf), where the concepts are labeled as positive, negative and neutral. The model uses Linear Discriminant Analysis (LDA) to classify the training data, where 81.8 percent of the original grouped concepts are correctly classified.
The performance of the polarity detection method is evaluated using the precision and recall method. The overall accuracy for concept-level polarity detection is 70.24 percent, whereas the accuracy at the sentence level is 65.63 percent for simple sentences and 73.77 percent for complex or compound sentences, which can be considered an acceptable range for a less privileged language like Bengali. One of the limitations of the work is its failure to achieve the desired level of abstraction in forming the concepts, due to the language complexity of Bengali. Therefore, it is fully dependent on the terms available within the sentence and translates those to English for mapping into the AffectiveSpace. However, an independent dependency parser for Bengali can be generated by integrating the language morphology along with the language syntax to extract concepts with a high level of abstraction. Moreover, the generation of a Bengali affect space can be of great use in the field of NLP.

Chapter 1
Introduction

With
the rapid development of the World Wide Web, human activity over the Internet is growing vast. People have an inherent curiosity to discover what others are thinking. Public opinion is also gaining importance in every interaction of the personal, professional, social and political sectors. Online newspapers, blogs, discussion groups, tweets and comments on social and electronic media are a great source of public opinion. The opportunities to retrieve public opinion from these unstructured data have opened up the area of research on Natural Language Processing (NLP) and, especially, Sentiment Analysis (SA). The sentiment of public opinion is considered a valuable piece of information to business organizations, social workers, government bodies and even law enforcement agencies for decision making. Polarity detection at the sentence or concept level further enhances the applications.

1.1 Background

Existing works on NLP [8] can be broadly divided into two main categories. Firstly, the rule based approaches are focused on the construction of NLP tools like Parts of Speech (POS) taggers, Named Entity Recognizers etc. Since these methods explore the relationships among the lexicons within the sentence or document, they are language and domain independent and rely greatly on some knowledge bases. On the other hand, machine learning approaches utilize the knowledge through training data and use some classification methods for categorization of the sentiment. However, most of the recent work on sentiment analysis integrates both approaches to overcome the limitations of each, and thereby improve the efficiency of the work. In addition, a recent evolution in the field of sentiment analysis known as affective computing [2] is getting popularity in the NLP research community.
The novelty of this paradigm is its capability to merge linguistic techniques with common-sense computing. Thereby, it facilitates properly deconstructing the text into concepts and hence improves the accuracy of polarity detection.

Polarity is detected at the document, sentence or concept level. Feature based classifications are efficient at the sentence or document level, but perform poorly at the concept level as they rely on contextual meaning. As concepts are independent of languages, this facilitates using available resources in other languages, such as SentiWordNet [9] and WordNet-Affect [7], for sentiment analysis. A significant number of words can take different polarities when associated with other words to form a concept. Concepts can be categorized within the affective space generated from WordNet-Affect, intended to infer the conceptual and affective information for a specific domain. Therefore, AffectiveSpace [3] is considered a good resource to improve the efficiency of sentiment analysis at the concept level.

Bengali, being the national language of Bangladesh as well as the sixth most spoken language in the world [10], draws the attention of NLP researchers. The contents in Bengali over the Internet are also increasing every day through social media, news portals, blogs and other online platforms. These offer a huge volume of unstructured text data for information retrieval. Not much effort has been made on polarity detection at the concept level in the Bengali language. The scarcity of NLP tools and linguistic resources makes NLP tasks in Bengali more difficult and inconsistent. Parsing the sentence is the precondition for any NLP task, whereas no recognizable parser is available for Bengali text that can decompose a Bengali sentence into terms