rate was set to 0.001, with which the network starts to train itself, but as mentioned earlier the learning rate is adapted step-wise here using Eq. 2, where last_epoch is incremented each time an epoch completes all of its steps, at which point the value of Step Wise LR is updated.

Step Wise LR = base_lr * gamma^(floor(last_epoch / step_size))    (2)

4 Results

The task of finding a model which will detect the signs based on ASL was divided into two parts. In the first segment, two custom models were built, from which an accuracy of 86.52% was achieved, and in the second segment an accuracy of 85.88% was achieved using pre-trained models on Dataset-A.

438 P. Paul et al.

4.1 Results from Custom Model

To evaluate the model, several approaches were taken. At first, two custom models (custom-model-A and custom-model-B) were created using the corresponding configurations mentioned in Sect. 3.3. Using the custom-model-A mentioned in Table 6, 77.19% accuracy was achieved while validating images from Dataset-A. Here, to minimize overfitting, dropout of 40% and 60%, L2 regularization and Global Average Pooling (GAP) were used. After 25 epochs, a training accuracy of 96.33% and a validation accuracy of 77.19% were achieved using 5,214,840 trainable parameters for RGB images.

Table 6. Results obtained using the custom models

Model name      No. of trainable  DataSet-A               DataSet-B
                parameters        Training    Validation  Training    Validation
                                  acc. (%)    acc. (%)    acc. (%)    acc. (%)
Custom-Model-A  5,214,840         96.33       77.19       86.54       66.79
Custom-Model-B  428,728           98.54       86.52       89.45       62.16

The custom-model-B from Table 6, whose architecture was discussed in Sect. 3.3, gave the best validation accuracy compared to custom-model-A for Dataset-A. Between the two custom models, training and validation accuracy for each model in every epoch were recorded to find out the model that gives comparatively better validation accuracy. From Fig.
3, we can see that after a certain epoch the training accuracy (highlighted in blue) remains almost the same, whereas the validation accuracy (highlighted in orange) drops and does not increase prominently. This indicates that there was no need to run the model beyond that epoch. To overcome overfitting, regularization techniques such as Dropout and L2 Regularization were applied by tuning the hyperparameters, which led to the best performance on the validation set. For this work, 3 different instances of dropout value for custom-model-B were considered, where dropping 60% of neurons reduced the overall validation loss by 0.25, which helped to increase the validation accuracy.

Fig. 3. Illustration of training and validation accuracy of the proposed Custom-Model-B (Color figure online)

A Modern Approach for Sign Language Interpretation Using CNN 439

4.2 Results from Transfer Learning

Table 7. Results from pre-trained models using DataSet-A

Pre-trained model  No. of total  Training      Validation
                   parameters    accuracy (%)  accuracy (%)
MobileNetV2        4,297,816     99.88         84.93
NASNetMobile       5,044,012     99.60         85.88
DenseNet121        7,467,480     96.18         76.92
VGG19              29,076,312    84.75         59.93
VGG16              20,024,384    86.50         55.57

Fig. 4. Illustration of training and validation accuracy of the best two transfer learning models

To improve the validation accuracy, a fine-tuning process was introduced, where the model was initialized using the technique mentioned in Sect. 3.4. From this configuration, with 9,051,928 trainable parameters and 14,714,688 non-trainable parameters, a validation accuracy of 55.57% was achieved using the VGG16 model, whose weights were pre-trained on the ImageNet dataset, and from VGG19, with 9,051,928 trainable parameters and 20,024,384 non-trainable parameters, a validation accuracy of 59.93% was
achieved, where the training accuracy was 84.75%. In both models, all parameters except those in the fully connected layers were frozen. As this result was not even close to our custom models, a different technique with other pre-trained models was applied. With this technique, the top (fully connected) layers of the model were first trained for 10 epochs; then the weights of all the pre-trained layers and the top layers were unfrozen and the same model was trained a second time. In the first phase, when only the top layers were trained, the "Softmax" activation that relies on the last fully connected layer adapted itself in such a way that when the model was retrained for 25 epochs in the second phase, it gave much better validation accuracy, as shown in Table 7. From this process, using the pre-trained weights of the 'MobileNetV2' and 'NASNetMobile' models with 2072 and 1176 corresponding neurons, accuracies of 84.93% and 85.88% were recorded. In the case of DenseNet121, VGG16 and VGG19, the same configuration could not be applied, as they hold a huge number of parameters or weights in terms of memory. Among all the pre-trained models, "MobileNetV2" and "NASNetMobile" give linear growth in terms of validation accuracy. From Fig. 4 we can see that the validation accuracy stays low for the first 3-4 epochs, then jumps to 75%, gradually increases to 84% and stabilizes for the remaining epochs. On the other hand, the training accuracy reaches 98% in the first 5-6 epochs and remains stable for the rest of the epochs.

4.3 Discussion on Results

The previous work that gave the best validation accuracy on the ASL fingerspelling dataset was conducted by Pugeault and Bowden [18], who recorded accuracy on three different instances. They obtained 73% accuracy using only RGB images, 69% using only depth information and 75% using RGB+depth images.
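The two-phase schedule described here (train the top layers with the convolutional base frozen, then unfreeze everything and retrain) can be sketched as trainable-parameter bookkeeping in plain Python. The Layer class below is an illustrative stand-in, not the paper's actual Keras model, and the counts loosely follow the VGG-scale figures discussed above:

```python
# Sketch of two-phase fine-tuning as trainable-parameter bookkeeping.
# Layer names and counts are illustrative stand-ins, not the paper's model.
class Layer:
    def __init__(self, name, n_params, trainable):
        self.name = name
        self.n_params = n_params
        self.trainable = trainable

def trainable_params(layers):
    return sum(layer.n_params for layer in layers if layer.trainable)

# Phase 1: freeze the pre-trained convolutional base, train only the top.
model = [
    Layer("conv_base", 14_714_688, trainable=False),  # pre-trained weights
    Layer("dense_top", 9_051_928, trainable=True),    # new classifier head
]
phase1 = trainable_params(model)  # only the top-layer weights update

# Phase 2: unfreeze all layers and retrain the whole network.
for layer in model:
    layer.trainable = True
phase2 = trainable_params(model)  # every weight now updates

print(phase1, phase2)
```

In a real framework the same effect is achieved by toggling each layer's trainable flag between the two training runs.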
In our work, we have considered only two instances, as we used RGB ("DataSet-A") and Depth+RGB ("DataSet-B") to measure performance. Although our customized models could not perform better than their work [18] on "DataSet-B", all the other models performed better than [18] on RGB images. A total of 240 unseen color images were used to measure the F1 score of both customized models. Each model was evaluated on 10 images from each class against their ground-truth labels. Based on the precision and recall values, an F1 score was then generated for each class, as shown in Table 8.

Fig. 5. Illustration of different scenarios of our custom models' predictions

For "Custom-Model-A", recall values are significantly higher than the precision values for classes k, m, o, v, while for "Custom-Model-B" those classes are d, q, w. The reason behind this might be that the signs for c and o, w and f, d and l, m and n, and k and r, shown in Fig. 5, are quite similar, which is why the models may get confused while classifying those particular classes. In the case of both models, the classifiers could not predict n, r for any of the given images. In the case of the letters c, f, "Custom-Model-A" shows small
confusion, as the precision values are slightly lower than the recall values for those classes, whereas for "Custom-Model-B" those classes are l, t. Although for some classes the custom models could not give accurate predictions, the overall performance of both models was good, as the macro-average value of "Custom-Model-A" is nearly 59% and for "Custom-Model-B" it is nearly 68%.

Table 8. F1 score obtained from customized models

Class  F1-score                         Predicted accurately
       Custom-Model-A  Custom-Model-B  Custom-Model-A  Custom-Model-B
a      0.89            1.00            8               10
b      0.89            0.17            8               1
c      0.83            1.00            10              10
d      0.00            0.46            0               4
e      0.46            0.75            3               9
f      0.91            0.89            10              10
g      0.95            1.00            9               10
h      1.00            0.90            10              8
i      0.84            0.68            8               7
k      0.33            0.95            8               10
l      1.00            0.79            10              7
m      0.62            0.93            10              10
n      0.00            0.00            0               0
o      0.12            0.36            1               3
p      0.30            1.00            3               10
q      0.00            0.57            0               7
r      0.00            0.00            0               0
s      1.00            1.00            10              10
t      0.53            0.67            4               5
u      0.95            0.89            9               8
v      0.27            0.74            4               6
w      0.75            0.24            6               2
x      0.00            0.71            0               7
y      0.83            0.35            10              9

5 Conclusion

In this paper, we present an image-based, comparative approach to finding models that can interpret sign languages more efficiently from the ASL Finger Spelling dataset. For that, we developed two custom models and several transfer learning models based on convolutional neural networks. For training and validating the networks, two approaches were considered: one using only RGB images and the other using both RGB and depth information. Our classification results on RGB images exceeded all the previous models. For further improvement, the letters j and z will be included via a video dataset, which will be utilized to recognize continuous hand signs.

References

1. Anderson, R., Wiryana, F., Chandra, M., Putra, G.: Sign language recognition application systems for deaf-mute people: a review based on input-process-output. Procedia Comput. Sci. 116, 441-448 (2017)
2. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition, pp. 1-14 (2015)
3.
2014 IEEE International Conference on Advanced Communications, Control and Computing Technologies, pp. 1412-1415 (2014)
4. Núñez Fernández, D., Kwolek, B.: Hand posture recognition using convolutional neural network. In: Mendoza, M., Velastín, S. (eds.) CIARP 2017. LNCS, vol. 10657, pp. 441-449. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-75193-1_53
5. Ghotkar, A., Kharate, G.K.: Study of vision based hand gesture recognition using Indian sign language (2017)
6. Chollet, F.: Xception: deep learning with depthwise separable convolutions (2016)
7. Hoque, T., Kabir, F.: Automated Bangla sign language translation system: prospects, limitations and applications, pp. 856-862 (2016)
8. Hosoe, H., Sako, S.: Recognition of JSL finger spelling using convolutional neural networks, pp. 85-88 (2017)
9. Huang, G., Weinberger, K.Q.: Densely connected convolutional networks (2016)
10. Karabasi, M., Bhatti, Z., Shah, A.: A model for real-time recognition and textual representation of Malaysian sign language through image processing. In: 2013 International Conference on Advanced Computer Science Applications and Technologies (2013)
11. Karmokar, B.C., Alam, K.R., Siddiquee, K.: Bangladeshi sign language recognition employing neural network ensemble (2012)
12. Kishore, P.V.V., Kumar, P.R.: Segment, track, extract, recognize and convert sign language videos to voice/text. IJACSA 3, 35-47 (2012)
13. Koller, O., Forster, J., Ney, H.: Continuous sign language
recognition: towards large vocabulary statistical recognition systems handling multiple signers. Comput. Vis. Image Underst. 141, 108-125 (2015)
14. Kumar, P.K., Prahlad, P., Loh, A.P.: Attention based detection and recognition of hand postures against complex backgrounds (2012)
15. Masood, S., Srivastava, A., Thuwal, H.C., Ahmad, M.: Real-time sign language gesture (word) recognition from video sequences using CNN and RNN. In: Bhateja, V., Coello Coello, C.A., Satapathy, S.C., Pattnaik, P.K. (eds.) Intelligent Engineering Informatics. AISC, vol. 695, pp. 623-632. Springer, Singapore (2018). https://doi.org/10.1007/978-981-10-7566-7_63
16. Mekala, P., Gao, Y., Fan, J., Davari, A.: Real-time sign language recognition based on neural network architecture, pp. 195-199 (2011)
17. Prajapati, R., Pandey, V., Jamindar, N., Yadav, N., Phadnis, P.N.: Hand gesture recognition and voice conversion for deaf and dumb. IRJET 5, 1373-1376 (2018)
18. Pugeault, N., Bowden, R.: Spelling it out: real-time ASL fingerspelling recognition (2011)
19. Rahaman, M.A., Jasim, M., Ali, H.: Real-time computer vision-based Bengali sign language recognition, pp. 192-197 (2014)
20. Rajam, P.S., Balakrishnan, G.: Real time Indian sign language recognition system to aid deaf-dumb people, pp. 1-6 (2011)
21. Rao, G.A., Kishore, P.V.: Selfie video based continuous Indian sign language recognition system. Ain Shams Eng. J. 9, 1929 (2017)
22. Sandler, M., Zhu, M., Zhmoginov, A., Howard, A., Chen, L.-C.: MobileNetV2: inverted residuals and linear bottlenecks (2018)
23. Savur, C.: Real-time American sign language recognition system by using surface EMG signal, pp. 497-502 (2015)
24. Sarawate, N., Leu, M.C., Öz, C.: A real-time American sign language word recognition system based on neural networks and a probabilistic model. Turk.
J. Electr. Eng. Comput. Sci. 23, 2107-2123 (2015)
25. Seth, D., Ghosh, A., Dasgupta, A., Nath, A.: Real time sign language processing system. In: Unal, A., Nayak, M., Mishra, D.K., Singh, D., Joshi, A. (eds.) SmartCom 2016. CCIS, vol. 628, pp. 11-18. Springer, Singapore (2016). https://doi.org/10.1007/978-981-10-3433-6_2
26. Singha, J., Das, K.: Recognition of Indian sign language in live video. Int. J. Comput. Appl. 70, 17-22 (2013)
27. Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: the all convolutional net, pp. 1-14 (2015)
28. Szegedy, C., Vanhoucke, V., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision (2015)
29. Tripathi, K., Baranwal, N., Nandi, G.C.: Continuous Indian sign language gesture recognition and sentence formation. Procedia Comput. Sci. 54, 523-531 (2015)
30. Uddin, S.J.: Bangla sign language interpretation using image processing (2017)
31. Wazalwar, S., Shrawankar, U.: Interpretation of sign language into English using NLP techniques. J. Inf. Optim. Sci. 38, 895 (2017)
32. Zoph, B., Shlens, J.: Learning transferable architectures for scalable image recognition (2017)
D S Sharma, R Sangal and E Sherly. Proc. of the 12th Intl. Conference on Natural Language Processing, pages 413-418, Trivandrum, India. December 2015. ©2015 NLP Association of India (NLPAI)

Acoustic Correlates of Voicing and Gemination in Bangla

Aanusha Ghosh
Center for Linguistics
Jawaharlal Nehru University
New Delhi
aanusha.ghosh@gmail.com

Abstract

The goal of this paper is to conduct an acoustic phonetic investigation of both primary and secondary cues that aid in the distinction between voiced and voiceless geminates in Bangla. Results of the statistical analyses examining both durational and non-durational correlates show that besides closure duration, secondary cues such as the amplitude of the stop release burst and the fundamental frequencies of the vowels immediately preceding and following the obstruent are acoustically significant in the distinction between voiced and voiceless geminates and singletons. Voiced stops have significantly greater burst amplitudes than voiceless ones, and geminates are flanked by vowels with significantly higher F0s than those flanking corresponding singletons.

We also briefly explore the effects of gemination on V-to-V coarticulation. The assumption is that longer consonantal duration will act as a more effective barrier to V-to-V coarticulation in the case of geminates as opposed to singletons. Particularly, in the case of dental geminates, we hypothesize that longer consonantal duration contributes to lingual fronting, which manifests itself as greater resistance to V-to-V coarticulation.

1 Introduction

Maintaining voicing in obstruents is articulatorily challenging. A sufficient transglottal air pressure drop is required to maintain voicing. This becomes harder for obstruents, which require a rapid increase in intraoral air pressure. The difficulty increases in geminate obstruents, as they have longer closure.
(Ohala, 1983) Hence, crosslinguistically, voiced geminates are rarer than their voiceless counterparts (Hayes and Steriade, 2004). Bangla, an Eastern Indo-Aryan language spoken in Bangladesh and the Indian states of West Bengal, Tripura and Assam, and the fifth most spoken language in the world[1] with nearly 300 million speakers, is one of the few languages which have both voiced and voiceless geminates in their inventory.

[1] https://en.wikipedia.org/wiki/Bengali language

Work on gemination in Bangla has been sparse. The only existing study on Bangla geminates focuses on voiceless stops and does not take voiced geminates into account (Lahiri and Hankamer, 1988; Hankamer et al., 1989). This paper aims to fill that gap by taking voiced geminates, and the effects they might have on acoustic cues, into consideration. It also looks at how vowel-to-vowel coarticulation might be affected by gemination, especially in the case of dental stops. The articulatory motivation is that there should be some lingual fronting in the case of dental geminates, in order to maintain air pressure for a longer duration, and this fronting should then pose higher resistance to V-to-V coarticulation.

The rest of the paper is organised as follows: In Section 2, we cover a brief overview of work done on geminates and outline the aims and motivations of the current study. Section 3 presents the materials and methods used for the study, while Section 4 discusses the parameters measured in the study. Section 5 elucidates the statistical analysis employed for drawing inferences from the data. Section 6 deals with the F2 locus equations fitted for investigating the variation of the degree of coarticulatory resistance. Section 7 collates the results, and Section 8 concludes the work with a discussion of the inferences drawn from the analysis and explores issues
that need further work.

2 Related work

Lahiri and Hankamer (1988) investigated various acoustic cues and their perceptual relevance to the distinction between geminates and non-geminates in Bangla and Turkish. Their study, which focused only on voiceless stops, confirmed that closure duration was a perceptually salient cue for distinguishing between geminates and singletons in Bangla and Turkish, but discounted the idea that V1 duration was a secondary cue that could be used to identify geminates. In a subsequent paper (Hankamer et al., 1989), they concurred that secondary cues do contribute information about the distinction between geminates and singletons when the primary cues are ambiguous. However, they never explicitly stated exactly what these secondary acoustic cues might be, proposing instead that the set of secondary features that biased subjects when the primary cue of consonant duration was ambiguous was possibly due to a combination of cues, each by itself too subtle for their measurements to detect.

The cues that have been found to be consistently significant in the geminate-singleton distinction include stop closure duration and the length of the preceding vowel (V1 duration). Geminates have been found to have longer stop closure duration and shorter V1 duration (Esposito and Di Benedetto, 1999; Stevens and Hajek, 2004). Shortening of V2 duration in the case of geminates has also been observed; however, it has not been significantly different from that of singletons (Esposito and Di Benedetto, 1999). Voicing has also been observed to have an effect on stop closure duration in obstruents.
Stevens and Hajek (2004) found voiceless geminates to be substantially longer than voiced ones in Sienese Italian.

Studies of word-initial voiceless geminates in languages like Pattani Malay (Abramson, 1986; Abramson, 1987; Abramson, 1992; Abramson, 1999) and Kelantan Malay (Hamzah, 2013; Hamzah et al., 2013; Hamzah et al., 2012) show that when listeners do not have the advantage of relying on consonant duration as an acoustic cue for gemination, they make use of secondary cues which help in disambiguation. In particular, amplitude and fundamental frequency (Abramson, 1992; Abramson, 1999) were found to be significant cues to word-initial consonant length.

Stevens and Hajek (2004) found that long voiced stops are often partially devoiced. Moreover, they revealed that in the case of Sienese Italian, voiceless geminates were preaspirated and their voiced counterparts were devoiced, so that the phonetic contrast between voiced and voiceless stops no longer manifested in a difference in the presence or absence of closure voicing, but rather in the absence or presence of aspiration before the geminated consonant.

F2 locus equations, which are a source of relational invariance that determines place of articulation, are also signifiers of the degree of coarticulatory resistance for a given consonant. Fowler and Brancazio (2000) showed that the more highly resistant a consonant is, the lower its locus equation slope. High coarticulation resistance, measured as a low standard deviation of F2 at the consonant normalized by the variability of F2 across vowels, directly leads to a low locus equation slope. Low coarticulation resistance would, by the same logic, directly lead to a locus equation slope close to 1, since the variability of F2 measured at consonant release would be nearly the variability measured at the midpoint of the vowel (Iskarous et al., 2010).

Esposito and Di Benedetto (1999) observed that

Figure 1: Waveform display of the
word /upot̪:Oka/ ('valley')

V1 formant frequencies showed no relationship with gemination, suggesting that no extra vocal effort was required for articulating geminates. They made no comment about formant transitions, however, nor cast any light on the question of the variance of coarticulatory resistance in the presence of geminates.

In a study of the effect of consonant duration on V-to-V coarticulation in Japanese, Löfqvist (2009) found no significant effect of consonant closure duration on the degree of vowel-to-vowel coarticulation.

The aim of this paper is to answer the following questions:

• What are the major acoustic cues that set voiced geminates apart from voiceless ones?
• Are the secondary cues employed by word-medial geminates identical to those employed by word-initial geminates (in languages that do have them)?
• If voicing does indeed shorten the duration of stop closure, and gemination leads to partial devoicing (Stevens and Hajek, 2004), how are voiced geminates differentiated from voiceless singletons?
• Does gemination affect V-to-V coarticulation?

3 Materials and methods

The Shruti corpus, a Bangla corpus of read speech built and maintained by the Indian Institute of Technology, Kharagpur (IITKGP), was used as the source of data for this study. The corpus consists of 7383 unique sentences. There are 34 speakers, with ages varying from 20 to 40 years; 26 of the speakers are male, and 8 female. 700 words containing geminate and singleton stop consonant sequences from the Shruti corpus were hand annotated for word, place and manner of articulation, consonant burst, and preceding and following vowels using Praat.[2]
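A single annotated token of this kind can be modelled as a list of labelled time intervals. The sketch below uses invented tier labels and times loosely based on the /upot̪:Oka/ example, not actual corpus measurements:

```python
# One hypothetical annotated token: (tier, label, start_s, end_s).
# Tier names and all times are invented for illustration only.
token = [
    ("V1",    "u",        0.100, 0.180),
    ("POA",   "dental",   0.180, 0.340),   # stop closure span
    ("MOA",   "geminate", 0.180, 0.340),
    ("burst", "release",  0.340, 0.356),
    ("V2",    "o",        0.356, 0.430),
    ("word",  "upottOka", 0.100, 0.430),
]

def tier_duration_ms(tiers, name):
    """Duration of the first interval on the given tier, in milliseconds."""
    start, end = next((s, e) for t, _, s, e in tiers if t == name)
    return (end - start) * 1000.0

closure_ms = tier_duration_ms(token, "POA")  # stop closure duration
v1_ms = tier_duration_ms(token, "V1")        # preceding-vowel duration
print(closure_ms, v1_ms)
```

Durational parameters such as closure and vowel length then fall out of simple interval arithmetic over these tiers.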
Each an-notation file had six tiers of information: the vowelpreceding the consonant, V1, the place of articula-tion of the stop, POA, the manner of articulation ofthe stop, MOA, the release burst, the following vowel,V2 and the word in which the geminate/singletons se-quence appears.The distribution of the data is given in Table 1.Singleton GeminatePlace of Articulation Voiced Voiceless Voiced VoicelessBilabial 32 20 44 10Dental 44 149 36 165Retroflex — 48 18 25Velar 11 59 17 20Table 1: Distribution of singleton and geminate to-kens across places of articulationThere are no voiced retroflex singleton tokens be-cause the voiced retroflex ã does not appear as a sin-gleton word-medially in Bangla.4 MeasurementsThe following parameters were examined:• Consonant duration• Duration of the preceding vowel (V1)• Duration of the following vowel (V2)• Duration of the stop release burst• RMS amplitude of the stop release burst• Fundamental frequency of the preceding vowel• Fundemental frequency of the following vowelThese measurements were taken for both voicedand voiceless geminates as well as voiced and voice-less singletons corresponding to four places of articu-lation: bilabial, dental, retroflex and velar.5 AnalysisThree-way ANOVAs were done to test the signifi-cance of place of articulation, voicing and gemina-tion on the duration of stop closure and length of thepreceding and following vowels (V1 and V2). Theinteraction effects of all three factors on each of theparameters are given in Figures 2 and 3. In carryingout all of the following tests, the probability of type Ierror, α was fixed at 0.01 (1%).2Boersma, Paul & Weenink, David (2015). Praat: doing pho-netics by computer [Computer program]. Version 5.4.08, re-trieved 24 March</s>
2015 from http://www.praat.org/

Figure 2: Effect of place of articulation, voicing and gemination on consonant duration

The ANOVA showed that gemination [F(1,806)=1146.749, p<0.0001], place of articulation [F(1,806)=6.242, p<0.001] and voicing [F(1,806)=43.146, p<0.001] are all significant contributors towards the variation of consonant duration.

In the case of V1 duration, gemination [F(1,810)=21.86, p<0.01], place of articulation [F(1,810)=24.109, p<0.00001] and voicing [F(1,810)=77.33, p<0.00001] are all highly significant factors.

Figure 3: Effect of place of articulation, voicing and gemination on duration of preceding vowel

A three-way ANOVA carried out on V2 duration revealed that none of the factors appears to have any effect whatsoever on the duration of V2.

Two-way ANOVAs were carried out to test the significance of voicing and gemination on the duration and amplitude of the stop release burst, as well as the fundamental frequency (F0) of both the preceding and the following vowels.
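The factorial ANOVAs reported here all build on the F statistic; a minimal pure-Python version of the one-way case, shown with invented closure-duration values rather than the corpus measurements, can be sketched as:

```python
# Minimal one-way ANOVA F statistic: between-group variance over
# within-group variance. Data below are hypothetical durations in ms.
def f_statistic(groups):
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups
    )
    ss_within = sum(
        (x - sum(g) / len(g)) ** 2 for g in groups for x in g
    )
    return (ss_between / (k - 1)) / (ss_within / (n - k))

singleton = [82.0, 90.0, 85.0, 88.0]     # hypothetical closure durations
geminate = [150.0, 160.0, 155.0, 158.0]
F = f_statistic([singleton, geminate])
print(F)
```

A large F (relative to the critical value at the chosen α, here 0.01) indicates that the factor explains far more variance than chance would.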
The effects of gemination [F(1,817)=19.102, p<0.01] and voicing [F(1,817)=71.13, p<0.0001] were found to be extremely significant contributors towards the variance of burst duration.

It was also found that voicing [F(1,817)=93.661, p<0.0001] plays a very significant role in determining the burst amplitude.

The effect of gemination [F(1,817)=10.273, p<0.01] was found to be significant on the variation exhibited by the F0 values of the preceding vowel.

Figure 4: Effect of voicing and gemination on duration of stop release burst

Figure 5: Effect of voicing and gemination on amplitude of stop release burst

Figure 6: Effect of voicing and gemination on fundamental frequency of the preceding vowel

Voicing [F(1,817)=4.613, p>0.01] did not appear to have any effect on the F0 of the preceding vowel. Much like the preceding vowel, the F0 of the following vowel is affected significantly by gemination [F(1,817)=13.860, p<0.001] but not by voicing [F(1,817)=5.149, p>0.01].

Figure 7: Effect of voicing and gemination on fundamental frequency of the following vowel

6 F2 Locus Equations

Figure 8: F2 locus equations for voiceless dental geminate and singleton

For modelling the variation of the degree of coarticulatory resistance of obstruents with gemination, locus equations of F2onset against F2mid of the vowel following the obstruent were plotted for both singleton and geminate tokens.
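A locus-equation fit of this kind is an ordinary least-squares regression of F2 at vowel onset on F2 at vowel midpoint. A minimal sketch, using made-up formant values rather than the corpus tokens, is:

```python
# Ordinary least-squares fit of the locus equation
#   F2onset = beta + alpha * F2mid
# alpha (the slope) indexes coarticulatory resistance.
def locus_equation(f2_mid, f2_onset):
    n = len(f2_mid)
    mx = sum(f2_mid) / n
    my = sum(f2_onset) / n
    alpha = (
        sum((x - mx) * (y - my) for x, y in zip(f2_mid, f2_onset))
        / sum((x - mx) ** 2 for x in f2_mid)
    )
    beta = my - alpha * mx
    return alpha, beta

# Toy data lying exactly on F2onset = 400 + 0.75 * F2mid (values in Hz).
f2_mid = [1200.0, 1500.0, 1800.0, 2100.0]
f2_onset = [400.0 + 0.75 * x for x in f2_mid]
alpha, beta = locus_equation(f2_mid, f2_onset)
print(alpha, beta)
```

A slope near 1 signals low coarticulatory resistance; a lower slope signals higher resistance, as discussed in Section 2.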
Due to the sparsity of both inter- and intra-speaker data, tokens from a single speaker, "Chan", were chosen for the study. The preceding vowel was fixed as the high front vowel /i/ and the obstruent was fixed as the voiceless dental /t̪/. 47 tokens were analysed, 18 of which were geminates and the others singletons. The unequal sample size is due to the fact that the data was taken from a pre-existing corpus, and was thus subject to corpus-specific idiosyncrasies.

The second formant measures for the following vowel (V2) were taken as described in the previous section, with two measures for each formant: one at the 5% mark of the vowel for F2onset and another at the 50% mark of the vowel for F2mid. The values were plotted and a regression line fitted through the data. The regression equation relating F2onset and F2mid is given by:

F2onset = β + α · F2mid

where α is the slope of the regression line, and β is the intercept.

The fitted regression lines for singletons and geminates (Figure 8) show no significant differences in their slopes. This implies that, at least for the given vocalic and consonantal environment, gemination does not affect the degree of coarticulation. However, more data from other speakers, including different preceding vowels and obstruents, is required to draw stronger conclusions in this regard.

7 Results

                  F0 of surrounding vowels
Burst Amplitude   High                 Low
High              Voiced Geminate      Voiced Singleton
Low               Voiceless Geminate   Voiceless Singleton

Table 2: The roles of fundamental frequency and burst amplitude in the voiced-voiceless geminate-singleton distinction

Voicing has a significant effect on consonant duration, with both voiced singletons and voiced geminates having significantly shorter closure durations than their voiceless counterparts.

The duration of the preceding vowel is strongly affected by gemination: geminates have shorter V1 duration. V1 duration is significantly affected by voicing as well; voiced obstruents are preceded by longer vowels. V1 duration is also contingent on the place of articulation, and displays interaction effects of place of articulation with both voicing and gemination.

Both gemination and voicing affect the duration of the stop release burst: burst duration is greater for geminates. Burst amplitude is significantly affected by voicing but not gemination; voiced stops have greater burst amplitude.

The fundamental frequencies of both vowels are affected by gemination: vowels flanking geminates have higher fundamental frequency than those flanking singletons. Table 2 shows the variation of burst amplitude and fundamental frequency of the surrounding vowels with voicing and gemination.
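This two-cue space amounts to a 2x2 lookup; a sketch, with "high"/"low" standing in for values relative to some (hypothetical) threshold:

```python
# The 2x2 cue space as a lookup: burst amplitude separates voiced from
# voiceless stops; F0 of the flanking vowels separates geminates from
# singletons. "high"/"low" are placeholders for thresholded measurements.
CUE_TABLE = {
    ("high", "high"): "voiced geminate",
    ("high", "low"): "voiced singleton",
    ("low", "high"): "voiceless geminate",
    ("low", "low"): "voiceless singleton",
}

def classify(burst_amplitude, f0_of_flanking_vowels):
    return CUE_TABLE[(burst_amplitude, f0_of_flanking_vowels)]

print(classify("high", "high"))
```

In practice the thresholds would have to be estimated from the duration-ambiguous tokens themselves; the lookup only illustrates how the two secondary cues jointly resolve the four-way contrast.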
Together, these two cues help in disambiguating voiced and voiceless singletons and geminates.

The F2 locus equations plotted for voiceless dental geminates and singletons with V1 fixed as /i/ show no appreciable difference in their slopes, indicating that geminates do not, after all, affect V-to-V coarticulation, at least as far as this particular vocalic and consonantal context is concerned.

8 Conclusion and further work

The perceptual experiments on voiceless geminate stops carried out by Lahiri and Hankamer (1988) and Hankamer et al. (1989) focused exclusively on durational correlates. The presence of secondary acoustic cues, while acknowledged by them, was not credited with enough perceptual significance other than serving to bias listeners only when the primary cue of stop closure duration was ambiguous. However, the issue of voicing, the consequent perturbation of closure duration and its effects on the perception of geminates was left unaddressed in their work.

The results of this study clearly show that burst amplitude and fundamental frequency, both non-durational correlates, are significant contributors towards the distinction between voiced and voiceless geminates. Burst amplitude serves to disambiguate between voiced and voiceless stops, with the former having much greater burst amplitude than the latter. Fundamental frequency, on the other hand, is a key parameter in distinguishing geminates from non-geminates: the F0 of vowels surrounding geminate consonants is consistently higher than for singletons. Burst duration is also a significant indicator for both gemination and voicing, with voiceless obstruents and geminates having longer burst duration than voiced stops and singletons respectively.

These results corroborate the findings of Abramson (1992; 1999) and Hamzah et al. (2012; 2013) regarding word-initial geminates in Pattani Malay and Kelantan Malay respectively.
Thus, we can safely conclude that amplitude and F0 are powerful secondary cues that affect both word-initial and</s>
<s>word-medial geminates. We can also conclude that despite voicing shortening the duration of stop closure, geminates and singletons are still distinguishable by the fundamental frequencies of the surrounding vowels, which are significantly higher in the case of geminates.

From the preliminary study of the locus equations of the second formant of the following vowel, no difference was found between the slopes of the equations for geminates and singletons, indicating that geminates do not affect V-to-V coarticulation appreciably, at least as far as the voiceless dental /t̪/ is concerned. However, more data for other places of articulation and different vocalic contexts are required to draw a stronger conclusion.

One avenue of investigation that was left unexplored in this study, and which merits further research, is a comparison of the degree of devoicing (a phenomenon noted in Sienese Italian and Japanese by Stevens and Hajek (2004) and Kawahara (2005) respectively) that occurs in voiced geminates and singletons in Bangla.

References

Arthur S Abramson. 1986. The perception of word-initial consonant length: Pattani Malay. Journal of the International Phonetic Association, 16(01):8–16.
Arthur S Abramson. 1987. Word-initial consonant length in Pattani Malay. Haskins Laboratories Status Report on Speech Research, pages 143–147.
Arthur Abramson. 1992. Amplitude as a cue to word-initial consonant length: Pattani Malay. Haskins Laboratories Status Report on Speech Research, pages 251–254.
Arthur S Abramson. 1999. Fundamental frequency as a cue to word-initial consonant length: Pattani Malay. In Proceedings of the 14th International Congress of Phonetic Sciences, pages 591–594.
Anna Esposito and Maria Gabriella Di Benedetto. 1999. Acoustical and perceptual study of gemination in Italian stops. The Journal of the Acoustical Society of America, 106(4):2051–2062.
Carol A Fowler and Lawrence Brancazio. 2000.
Coarticulation resistance of American English consonants and its effects on transconsonantal vowel-to-vowel coarticulation. Language and Speech, 43(1):1–41.
Hilmi Hamzah, Janet Fletcher, and John Hajek. 2012. An acoustic analysis of release burst amplitude in the Kelantan Malay singleton/geminate stop contrast. In Proceedings of the 14th Australasian International Conference on Speech Science and Technology, pages 85–88.
Hilmi Hamzah, Janet Fletcher, and John Hajek. 2013. Amplitude and F0 as acoustic correlates of Kelantan Malay word-initial geminates.
Hilmi Hamzah. 2013. The acoustics and perception of the word-initial singleton/geminate contrast in Kelantan Malay. Ph.D. thesis.
Jorge Hankamer, Aditi Lahiri, and Jacques Koreman. 1989. Perception of consonant length: Voiceless stops in Turkish and Bengali. Journal of Phonetics, 17(4):283–298.
Bruce Hayes and Donca Steriade. 2004. The phonetic bases of phonological markedness. In B. Hayes, R. Kirchner, and D. Steriade, editors, Phonetically Based Phonology. Cambridge University Press.
Khalil Iskarous, Carol A Fowler, and Douglas H Whalen. 2010. Locus equations are an acoustic expression of articulator synergy. The Journal of the Acoustical Society of America, 128(4):2021–2032.
Shigeto Kawahara. 2005. Voicing and geminacy in Japanese: An acoustic and perceptual study. In K. Flack and S. Kawahara, editors, University of Massachusetts Occasional Papers in Linguistics 31: Papers in Experimental Phonetics and Phonology, pages 87–120.
A. Lahiri and J Hankamer. 1988. The timing of geminate consonants. Journal of Phonetics, 16:327–338.
Anders Löfqvist. 2009. Vowel-to-vowel coarticulation in Japanese: the effect of consonant duration. The Journal of the Acoustical Society of America, 125(2):636–639.
John J Ohala. 1983. The origin of sound patterns in vocal tract constraints. In The Production of Speech, pages 189–216. Springer.
Mary Stevens and John Hajek. 2004.
Comparing voiced and voiceless geminates in Sienese Italian: what role does preaspiration play? In</s>
<s>Proceedings of the 10th Australian International Conference on Speech Science and Technology, Sydney, volume 340, pages 340–345.</s>
<s>A Machine Learning Approach to Automating Bengali Voice Based Gender Classification

Proceedings of the SMART–2019, IEEE Conference ID: 46866. 8th International Conference on System Modeling & Advancement in Research Trends, 22nd–23rd November, 2019. College of Computing Sciences & Information Technology, Teerthanker Mahaveer University, Moradabad, India. Copyright © IEEE–2019, ISBN: 978-1-7281-3245-7.

Abstract—Do thicker vocal folds produce sounds with longer wavelengths? And can they produce higher pitches to human ears? We address these kinds of questions and try to identify the difference between male and female voices. Using machine learning algorithms it is possible to identify gender from voice; for that we extract the voice signal's MFCC features by calculating the Discrete Fourier Transform, a mel-spaced filter-bank and log filter-bank energies. Identifying gender from natural voice can be one of the most important parts of voice recognition. In plain voice-to-text conversion it is not important to detect the speaker's gender, but when voice recognition is used in real-life applications, identifying the speaker's gender becomes essential. Gender identification from voice is a field of natural language processing, which is a branch of artificial intelligence. We followed a simple working sequence to get the final result: input audio file, pre-processing, feature extraction, creating a CSV file with the features, training the model, and finally testing with test data. For feature extraction we used Mel-frequency cepstral coefficients (MFCCs), and for classification we used Logistic Regression, Random Forest and Gradient Boosting. After all this work we achieved 99.13% accuracy on a dataset containing 1652 recordings from more than 250 speakers, tested with 400 male and 400 female voices.

Keywords: Gender identification, Feature extraction, Voice to gender, Bangla voice gender, MFCCs

I.
Introduction

Voice recognition is one of the most discussed topics in NLP. Speech is the medium of human communication and interaction, created by a biological mechanism using several body parts. The human brain can automatically identify a speaker's gender on hearing a person's voice, but a computer cannot. A person's gender is essential for the interactions of social communities and computers, and technology increasingly requires automatic gender classification, which nowadays plays a vital role in many ways. Most voice detection systems detect voice by reading word sequences. This research builds a voice recognition application based on the wave frequency of a person's voice, which can automatically detect the gender of a human speaking the Bangla language. There are many useful applications of gender detection through voice; some of them are described below.

In crime detection it will be helpful. People commit different types of crimes through phone calls or voice messages, and criminals sometimes deliberately hide their identity. National security forces could monitor criminals through this system, so these types of crimes can be addressed by categorizing gender through voice [1]. Demographic investigation can be another use of a gender identifier: a nation's demographic or census information can be identified automatically through human voice, and demographic statistics such as gender, disability status, education status etc. can be collected by this type</s>
<s>of application [2]. For commercial betterment we can use it as well: gender detection is useful for guiding digital marketing and smart shopping, informing new smart websites, online marketing, digital advertising etc. Knowing the number of male and female customers would help build more effective commercial transactions [2]. In a mobile or online healthcare system this application can also play a vital role, making it easier for healthcare professionals to prescribe for the patient more accurately. There are also some vocal fold pathologies that are biased towards a specific gender, such as vocal fold cysts, which are found only in female patients [3].

S.M. Saiful Islam Badhon (1), Md. Habibur Rahaman (2), and Farea Rehnuma Rupon (3)
(1, 2, 3) Dept. of CSE, Daffodil International University, Dhaka, Bangladesh
E-mail: (1) saiful15-7878@diu.edu.bd, (2) habibur15-7761@diu.edu.bd, (3) farea15-7707@diu.edu.bd

Authorized licensed use limited to: University of Exeter. Downloaded on June 17, 2020 at 08:52:29 UTC from IEEE Xplore. Restrictions apply.

II. Related Work

Since the beginning of research in this area, several projects have been invented to recognize gender from speech, so detecting gender is not a new task in this decade. Some works have been done to recognize gender from a human voice. A system using bootstrapping on audio classification was introduced in [4], where the system identifies gender from speech.
It shows more than 90% performance for k-Nearest Neighbors, Neural Network, Naive Bayes, Logistic Regression, Decision Tree (C4.5) and Support Vector Machine (SVM) classifiers. A mixture of piecewise GMMs and neural networks was presented in [5] for content-based multimedia indexing, with each segment lasting 1 second; it produced about 90% accurate results for every language and channel. Another system provides around 90% accurate results and also classifies the voice using multimedia indexing of the voice channel [6]. A support vector machine with discriminative weight training was applied to identify gender in [7]: the SVM uses optimally weighted Mel-frequency cepstral coefficients (MFCCs) based on the Minimum Classification Error (MCE) criterion and generates a gender decision rule. Another method, introduced in [8], provides almost 100% accuracy. A system introduced in [9] uses Gaussian Mixture Models in a two-stage classifier for high accuracy and low complexity, showing more than a 95% accuracy rate. In 1992, Konig and Morgan worked with Linear Prediction Coding coefficients: they extracted 12 LPCs and energy features every 500 milliseconds using a multi-layer perceptron classifier. On the DARPA resource management database, which contains clean speech from around 160 speakers in US English, it showed 84% accuracy. Hidden Markov Models (HMMs) have been used to identify gender from speech, where the engine is trained with</s>
<s>one Hidden Markov Model per gender; this model is used to decode a signal from test speech. Parries and Carey in 1996 combined pitch and Hidden Markov Models to identify gender from speech, showing more than 97% accuracy; they experimented on sentences of 5 seconds from the OGI database. In 1997, using GMMs, Slomka and Sridharan combined a general audio classifier with a pitch-based approach; after removing silence from OGI and based on 7-second speech, the system reported 94% accuracy. Using MFCCs with a GMM classifier, Tzanetakis and Cook (2002) identified gender in a multimedia indexing context with 74% accuracy. It seems that most gender detection research works with foreign languages; in the Bangla language not much research has been done on recognizing gender from speech. A system introduced in [10] detects gender from Bengali speech by extracting features using the Fast Fourier Transform, but it provides a low accuracy of around 80%.

III. Methodology

Fig. 1: Workflow of the Work

It is possible to classify many things from voice using artificial intelligence. When we started working with natural voices, which belongs to natural language processing (NLP), we found that we needed to extract features of the voices; for detecting male and female voices it is important to find the features that capture the differences between them. So we planned our path to this goal, which is given in fig. 1. First of all, we collected the voices for creating the dataset.

A. Dataset

Speeches were collected from different sources: some via a Google form, audio call recordings, YouTube Bangla videos and through mobile recording. All the speakers are native speakers of Bangla and citizens of Bangladesh; the students of Daffodil International University helped a lot to collect these data. The average age of the speakers is 20-50, and the data were collected from different locations in Bangladesh, since speech was gathered from call recordings, YouTube and many other sources. All of the speech is in standard Bangla. Recordings were made through a mobile recording application; no special room or special accessories were used to make the data smooth or free from noise. Recordings were trimmed with the software Filmora, plus some online platforms that provide the option to trim long recordings, such as audiotrimmer.com and mp3cut.net. Most of the recordings are at 128 kbps and no filter was applied over the speech. The recordings are mostly 4-7 seconds long, and we collected exactly 1652 voices (821 female and 831 male) from more than 250 people. Fig. 2 compares the ratio between the numbers of male and female voices.

Fig. 2: Ratio of Male and Female Voice</s>
<s>Among the 6 steps of fig. 1, the feature extraction part was the most crucial and time-consuming: data processing and feature extraction took almost 90% of our total work time. We tried to identify male and female voices with 26 features of a voice; some of them are described below.

B. Feature Extraction

1) Zero Crossing Rate

This feature records the changes of sign in a voice signal over time. Since male voices generally carry more low-frequency energy than female voices, their signals tend to cross zero less often, and this shows up in the feature. In figs. 3 and 4 we took male and female recordings of the same Bengali sentence; although both speakers said the same sentence, there are clear differences in their signals.

Fig. 3: Zero Crossing Rate of Male Voice
Fig. 4: Zero Crossing Rate of Female Voice

2) Spectral Centroid

This feature finds the center of mass of the spectrum; more plainly, it indicates where the energy of a voice is concentrated. So, depending on this we tried to distinguish the voices; the figures below (fig</s>
<s>5 and 6) will help us to distinguish the voices with the help of the spectral centroid.

Fig. 5: Spectral Centroid of Male Voice
Fig. 6: Spectral Centroid of Female Voice

3) Chroma Feature

The chroma feature predominantly focuses on the tonal part of an audio signal; it helps to recognize chords and to find harmonic similarities between audio signals [15]. Normally male voices are lower than female voices because of longer and thicker vocal folds, which produce sounds with longer wavelengths that our ears identify as lower pitches; these differences in the vocal folds are caused by testosterone, a male sex hormone [16]. Accordingly, our dataset also shows some variation of the chroma feature between male and female voices, which is shown in fig. 7.

Fig. 7: Boxplot Presentation of Chroma Feature

4) Spectral Bandwidth

The difference between the highest and lowest points of a continuous range of frequencies is called bandwidth, measured in Hertz. It may refer to passband bandwidth or baseband bandwidth. The difference between the upper and lower cutoff frequencies, such as in a communication channel or a signal spectrum, is called the passband bandwidth; the figure (fig. 8) gives a clear picture of this. On the other hand, a bandwidth that is equal to its upper cutoff frequency is called the baseband bandwidth; it applies to a low-pass filter or baseband signal. The boxplot of spectral bandwidth for our data is given in fig. 9.

Fig. 8: Baseband Bandwidth
Fig. 9: Boxplot Presentation of Spectral Bandwidth

5) Rolloff

Roll-off refers to the behavior of a filter designed to attenuate frequencies above or below a certain point. It is called roll-off because the process is gradual: high-pass and low-pass filters both roll off frequencies outside their range, but they do not immediately eliminate all frequencies outside that range. The sound is gently (or not so gently) rolled off, with frequencies further above or below the cutoff frequency becoming increasingly attenuated. Roll-off steepness is commonly expressed in dB per octave, with higher numbers indicating a steeper filter: 24 dB/octave is steeper than 12 dB/octave. Fig. 10 illustrates the boxplot of roll-off.

Fig. 10: Boxplot Presentation of Roll off

6) MFCC Features

This is one of the most popular and effective feature extraction methods. MFCC stands for Mel-frequency cepstral coefficients. Human sounds are filtered by the shape of the vocal tract, including the tongue, teeth etc., and that shape determines what the sound is [11]; so if it is possible to determine the shape accurately, it becomes much easier to work with human voices. To compute MFCCs we need to follow the steps given in fig. 11 [11].

Fig. 11: Steps of MFCCs

For detecting pitch</s>
<s>in a linear manner, so that the system can understand those pitches of sound, we need to use the mel scale [12]. The formulas for converting between normal frequency and the mel scale are given below:

M(f) = 1125 × ln(1 + f/700)    (1)

M^(-1)(m) = 700 × (exp(m/1125) − 1)    (2)

Equation (1) is the formula for converting frequency to the mel scale and equation (2) is the formula for converting the mel scale back to frequency. After all of these, a simple description of the implementation steps is given below:

• First of all, it is important to cut the voice signal into small frames; 20 ms to 40 ms works well, and 25 ms is the standard frame size. So for a 32 kHz signal we get 0.025 × 32000 = 800 samples per frame. The next steps are applied to every single frame.

• This step computes the Discrete Fourier Transform of each frame:

S_i(k) = Σ_{n=1}^{K} s_i(n) e^(−j2πkn/K),  1 ≤ k ≤ K    (3)

Here, by calculating the DFT we find S_i(k), where i denotes the frame number, s_i(n) is the time-domain signal of frame i, and K is the length of a frame. We can then extract the power spectrum P_i(k) of frame i as:

P_i(k) = (1/K) |S_i(k)|²    (4)

• Computing the mel-spaced filter-bank is the main concern of this step. The mel filter-bank is basically a set of 20-40 filters, which are applied to the periodogram power spectrum from the previous step.

• We take the log of every energy from the previous step, which gives us the log filter-bank energies.

• Finally, by transforming those filter-bank energies with the discrete cosine transform, we get the cepstral coefficients of those energies.

Finally, we extract 20 MFCC features, shown below (figs. 12 and 13).
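Equations (1) and (2) can be sanity-checked with a few lines of Python. The 26-filter count and the 0-8000 Hz range below are illustrative assumptions, not values fixed by the paper:

```python
import math

def hz_to_mel(f):
    """Frequency (Hz) to mel scale, M(f) = 1125 * ln(1 + f/700), as in Eq. (1)."""
    return 1125.0 * math.log(1.0 + f / 700.0)

def mel_to_hz(m):
    """Mel scale back to frequency (Hz), the inverse formula of Eq. (2)."""
    return 700.0 * (math.exp(m / 1125.0) - 1.0)

# Edge frequencies of a mel-spaced filter bank: space the filter edges evenly
# on the mel scale, then map them back to Hz (26 filters and a 0-8 kHz band
# are assumed here). The resulting Hz spacing is dense at low frequencies and
# sparse at high frequencies, mimicking human pitch perception.
n_filters = 26
low, high = hz_to_mel(0.0), hz_to_mel(8000.0)
mels = [low + i * (high - low) / (n_filters + 1) for i in range(n_filters + 2)]
edges_hz = [mel_to_hz(m) for m in mels]

print(f"hz_to_mel(1000) = {hz_to_mel(1000.0):.1f} mel")
print(f"first three filter edges (Hz): {[round(e, 1) for e in edges_hz[:3]]}")
```

These edge frequencies are what define the triangular filters applied to the power spectrum in the filter-bank step above.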
Fig. 12: MFCCs for Male
Fig. 13: MFCCs for Female

So finally we collected 26 features of a voice: the chroma feature, root mean square error, spectral centroid, spectral bandwidth, roll-off, zero crossing rate, and 20 features from the MFCCs. The heat map in fig. 14 shows the correlations between all the features we worked with; for a clearer view of fig. 14 use the link: https://github.com/SaifulBadhon/heatmap/blob/master/heatmap.png

Fig. 14: Heat-map

IV. Experiments and Results

A. Experiment Setup

We tested our system in two ways: one was splitting the data into an 8:2 train/test split, and the other was giving it random voices that had not been trained on before and matching the predicted outcome with the actual outcome. We tried several machine learning algorithms for this prediction; Logistic Regression, Random Forest and Gradient Boosting showed the best results among all of them. And gradient</s>
<s>boosting was the best. We used a confusion matrix to find the accuracy on the randomly tested voices. The confusion matrices of the different models are plotted below (figs. 15, 16 and 17).

Fig. 15: Heat Map of Gradient Boosting Confusion Matrix
Fig. 16: Heat Map of Random Forest Confusion Matrix
Fig. 17: Heat Map of Logistic Regression Confusion Matrix

Fig. 18: Workflow of Detecting Result

After completing the training of the model, we need to test it; to get results we followed the workflow given in fig. 18.

B. Result

After training the models, we got the highest accuracy with the Gradient Boosting algorithm: 99.13%.
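The evaluation protocol described above (8:2 split, confusion matrix, per-class metrics) can be sketched end to end. The feature vectors below are synthetic stand-ins for the 26 voice features, and a minimal nearest-centroid classifier stands in for the paper's Gradient Boosting / Random Forest / Logistic Regression models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for 26-dimensional voice feature vectors: two classes
# with shifted means, playing the roles of male (0) and female (1) voices.
X = np.vstack([rng.normal(0.0, 1.0, (100, 26)),
               rng.normal(1.5, 1.0, (100, 26))])
y = np.array([0] * 100 + [1] * 100)

# 8:2 train/test split, as in the experiment setup.
idx = rng.permutation(len(y))
train, test = idx[:160], idx[160:]

# Minimal nearest-centroid classifier (a stand-in, not the paper's models).
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
pred = np.linalg.norm(X[test][:, None, :] - centroids, axis=2).argmin(axis=1)

# 2x2 confusion matrix: rows are true labels, columns are predictions.
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(y[test], pred):
    cm[t, p] += 1

# Precision/recall/F1 for class 1, plus overall accuracy, as in the tables.
tp, fp, fn = cm[1, 1], cm[0, 1], cm[1, 0]
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
accuracy = np.trace(cm) / cm.sum()
print(cm)
print(f"precision={precision:.2f} recall={recall:.2f} "
      f"f1={f1:.2f} accuracy={accuracy:.2f}")
```

With real features and a stronger model, the same bookkeeping produces the per-class precision, recall and F1 figures reported in the result tables.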
For that we tested with 400 male and 400 female voices for each algorithm; the results are given below.

Table 1: Performance with Gradient Boosting Algorithm
              Precision  Recall  F1-Score  Support  Accuracy
Male          0.99       1.00    0.99      400      0.9913
Female        1.00       0.98    0.99      400      0.9913
Micro Avg     0.99       0.99    0.99      800      0.9913
Macro Avg     0.99       0.99    0.99      800      0.9913
Weighted Avg  0.99       0.99    0.99      800      0.9913

Table 2: Performance with Random Forest Algorithm
              Precision  Recall  F1-Score  Support  Accuracy
Male          0.97       0.99    0.98      400      0.9825
Female        0.99       0.97    0.98      400      0.9825
Micro Avg     0.98       0.98    0.98      800      0.9825
Macro Avg     0.98       0.98    0.98      800      0.9825
Weighted Avg  0.98       0.98    0.98      800      0.9825

Table 3: Performance with Logistic Regression Algorithm
              Precision  Recall  F1-Score  Support  Accuracy
Male          0.91       0.93    0.92      400      0.9162
Female        0.93       0.90    0.92      400      0.9162
Micro Avg     0.92       0.92    0.92      800      0.9162
Macro Avg     0.92       0.92    0.92      800      0.9162
Weighted Avg  0.92       0.92    0.92      800      0.9162

Table 4: Performance of the Different Models
Model                Accuracy  Error Rate  Precision  Recall  F1-Score
Gradient Boosting    99.13%    .88         99         99      99
Random Forest        98.25%    .74         98         98      98
Logistic Regression  91.62%    .27         92         92      92

So we can say that Gradient Boosting is the best choice for us, since with this algorithm we get the best possible accuracy, 99.13%, with an error rate of 0.88%.

V. Conclusion and Future Work

This work tried to detect human gender from voices in the Bengali language. In the near future there will be heavy use of voice-based applications: by 2020, 50% of internet searches are expected to be made by voice [13], and 100 million smartphone users are expected to use a voice assistant in 2020 [14]. Even in Bangladesh, native speakers of Bengali have started using voice-based applications. For this upcoming future of voice recognition systems, it will be mandatory to detect the gender of a voice. There are some research papers on gender detection in the Bengali language with impressive accuracy, but they lack variety in the voices, that is, they lack speakers; the more speakers we have, the more variety we have in the voices. Here this paper tried to work with a greater variety of voices: we had more than 250 speakers and exactly 1652 voices, and the accuracy was 99.13%. This paper did not work with the third gender; in future we want to work with the third gender, and we want to improve the variety of our dataset.</s>
<s>We are focusing on the variety of the dataset, not its size, because for gender detection we need different types of speakers so that we get more and more variety in the voices; many voices from the same speaker will not be helpful for gender detection.

References

[1] P. Gupta, S. Goel, A. Purwar, "A Stacked Technique for Gender Recognition Through Voice", 2018 Eleventh International Conference on Contemporary Computing (IC3), 2-4 Aug. 2018.
[2] F. Lin, Y. Wu, Y. Zhuang, X. Long, "Human Gender Classification: A Review", 2015. [Online]. Available: https://www.researchgate.net/publication/280105452 [Accessed: 29-Aug-2019].
[3] M. Alhussein, Z. Ali, M. Imran and W. Abdul, "Automatic Gender Detection Based on Characteristics of Vocal Folds for Mobile Healthcare System". Available: https://www.hindawi.com/journals/misy/2016/7805217/ [Accessed: 29-Aug-2019].
[4] G. Tzanetakis, "Audio-based gender identification using bootstrapping", in Communications, Computers and Signal Processing, 2005. PACRIM. 2005 IEEE Pacific Rim Conference on. IEEE, 2005, pp. 432-433.
[5] H. Harb and L. Chen, "Voice-based gender identification in multimedia applications", Journal of Intelligent Information Systems, vol. 24, no. 2-3, pp. 179-198, 2005.
[6] "A general audio classifier based on human perception motivated model", Multimedia Tools and Applications, vol. 34, no. 3, pp. 375-395, 2007.
[7] S.-I. Kang and J.-H. Chang, "Discriminative weight training-based optimally weighted MFCC for gender identification", IEICE Electronics Express, vol. 6, no. 19, pp. 1374-1379, 2009.
[8] L. Kye-Hwan, K. Sang-Ick, K. Deok-Hwan, and J.-H. Chang, "A support vector machine-based gender identification using speech signal", IEICE Transactions on Communications, vol. 91, no. 10, pp. 3326-3329, 2008.
[9] Y. Hu, D. Wu, and A. Nucci, "Pitch-based gender identification with two-stage classification", Security and Communication Networks, vol. 5, no. 2, pp. 211-225, 2012.
[10] M. S. Ali, M. S. Islam, and M. A. Hossain, "Gender recognition system using speech signal", International Journal of Computer Science, Engineering and Information Technology, 2012.
[11] J. Lyons, "Mel Frequency Cepstral Coefficient (MFCC) tutorial", 2013. [Online]. Available: http://practicalcryptography.com/miscellaneous/machine-learning/guide-mel-frequency-cepstral-coefficients-mfccs/ [Accessed: 23-Aug-2019].
[12] "The mel frequency scale and coefficients", 2013. [Online]. Available: http://kom.aau.dk/group/04gr742/pdf/MFCC worksheet.pdf [Accessed: 27-Aug-2019].
[13] R. Sentance, "The future of voice search: 2020 and beyond", 2018. [Online]. Available: https://econsultancy.com/the-future-of-voice-search-2020-and-beyond/ [Accessed: 29-Aug-2019].
[14] C. Ciligot, "7 Key Predictions For the Future of Voice Assistants and AI", 2019. [Online]. Available: https://clearbridgemobile.com/7-key-predictions-for-the-future-of-voice-assistants-and-ai/ [Accessed: 29-Aug-2019].
[15] M. Kattel, A. Nepal, A. Shah, and D. Shrestha, "Chroma Feature Extraction", 2019. [Online]. Available: https://www.researchgate.net/publication/330796993 [Accessed: 21-Sep-2019].
[16] H. Reith, "Why are male and female voices distinctive?", 2016. [Online]. Available: https://www.quora.com/Why-are-male-and-female-voices-distinctive [Accessed: 21-Sep-2019].</s>
<s>Design of a voice controlled robotic gripper arm using neural networks

978-1-5386-1887-5/17/$31.00 ©2017 IEEE

Fariha Musharrat Haque, Department of Mechanical Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, farihahaque019@gmail.com
Asif Shahriyar Sushmit, Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, sushmit0109@gmail.com
M. A. Rashid Sarkar, Department of Mechanical Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, rashid@me.buet.ac.bd

Abstract— The aim of this work is to propose a method for building an efficient Bangla voice controlled robotic gripper mechanism using neural networks. Robots are becoming an essential part of many industries and fields, and various ways are presently used to control one; the most user-friendly of them is control by voice commands. Though voice controlled robots are becoming a popular concept, the construction of Bangla voice controlled robots is still a new idea. Controlling the robot with voice commands along with visual feeds helps the robot operate easily and more accurately. This robot consists of three modules: a speech command recognition module, an object classifier module and a robotic gripper arm module. At first, the robot takes voice commands on which objects to grab and displace; then it finds the object using the object classifier module; and finally it grabs and displaces the object using the robotic gripper arm module. The speech recognition module and the object classifier module use two distinct neural networks along with additional hardware to perform their tasks. This paper presents the design and fabrication process of the robot so that robots built from this design can work under different situations. This robot can be used to perform tasks with high efficiency at both industrial and domestic levels.
Index Terms— Neural Networks, Robotic Gripper, Speech Recognition. I. INTRODUCTION peech recognition and voice recognition have gained momentum in the last few years thanks to neural networks and deep learning. Speech recognition deals with getting the information out of an audio file whereas voice recognition ensures receiving information as well as security since voice recognition can distinguish between audio samples from different users. This work focuses on speech recognition. The conventional speech recognition process is based on Hidden Markov Models(HMM). In this approach the input command is taken and then processed and verified against acoustic and language models. But at present since Deep Neural Nets are on the rise, using DNNs to do the pattern recognition is more convenient. Interested reader may check the work Fayek, Lech and Cavedon [1] where frame based formulation to speech emotion recognition was described using deep learning. An object classifier is used to take the visual feed and relate to the voice commands. And then a Robotic Gripper Module is used to pick up the target object. To perform the last part with accuracy the work of Levine, Pastor, Quillen et al [2] can be regarded as pioneering. They used hand eye co-ordination for robotic grasping using deep learning. Their system uses a grasp prediction network to choose the motor commands for</s>
the robot that maximized the probability of a successful grasp. For the robotic gripper part, the work of Preseren, Augustin and Mravlje [3] can be referred to, where guidelines for designing the gripper arm are indicated. Different types of grippers are used for various purposes. There have been very few works on Bangla voice controlled robots in the past. One mentionable work was done by Bhattacharjee, Khan and Haidar [4] on a Bangla voice controlled robot for rescue operations in noisy environments; in their paper they generated a small codebook in order to implement a short list of commands, using traditional speech recognition algorithms. Schneider, Sturm and Stachniss have shown [5] the use of tactile sensors to classify objects without any visual feed. Reference can also be made to the works by House, Malkin and Bilmes [6] and R. Zhou, K. P. Ng and Y. S. Ng [7], as those models are closely related to the current work and can be further developed to build other innovative systems. We combine our ideas with some ideas from their works to develop a low-cost design.

International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS-2017)

II. SPEECH RECOGNITION MODULE

Instead of traditional algorithms, Recurrent Neural Networks (RNN) are better at speech recognition. The reason for choosing an RNN over other ML or DL methods is that in a feed-forward neural network, signals flow in only one direction, from input to output, whereas in an RNN each step of the output is tied to the prior input. RNNs are therefore more suitable for speech recognition, since the sample length is a variable of time. The speech recognition module of our model consists of:
- a low-noise audio input portion
- a computing portion

For the prototype version, the training input audios are first taken by a microphone, then sampled at 16 kHz and quantized.
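As an illustration of this capture front end, the quantized 16 kHz stream can be framed and turned into per-frame magnitude spectra before being handed to the network. The paper's actual implementation is in MATLAB, so the Python sketch below is only a stand-in, and the 25 ms frame / 10 ms hop sizes are illustrative assumptions, not values from the paper.

```python
import math

def frame_signal(samples, frame_len=400, hop=160):
    """Split a 16 kHz stream into overlapping 25 ms frames with a 10 ms hop."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

def magnitude_spectrum(frame):
    """Hamming-window one frame and take a naive DFT magnitude.

    Written out longhand for clarity; a real implementation would use an FFT.
    """
    n = len(frame)
    w = [s * (0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)))
         for i, s in enumerate(frame)]
    mags = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(w))
        im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(w))
        mags.append(math.hypot(re, im))
    return mags

# A 0.05 s test tone at 440 Hz; each frame's spectrum becomes one RNN time step.
tone = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(800)]
features = [magnitude_spectrum(f) for f in frame_signal(tone)]
```

With a 400-sample frame the bin spacing is 40 Hz, so the 440 Hz test tone peaks in bin 11 of each spectrum; the sequence of such spectra is what a recurrent network would consume one time step at a time.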
After performing a fast Fourier transform and noise reduction on the data, it is plotted, and after further processing the data is fed to a recurrent neural network having 15 (or more, depending on conditions) layers. This part of the model uses MATLAB to build the network. For command recognition, an average-quality microphone takes the audio input and determines which of the preset commands was given at that instance. For the prototype, four Bangla words are used to train the neural net, with 20-50 audio files needed for each command during training.

Diagram 1: Speech Recognition Module

III. OBJECT CLASSIFIER MODULE

A simple object classifier is the second module here; it uses image processing and a neural network. The classifier in the prototype version is designed to choose from three different objects and to determine the position of the object that is to be picked up. A portable 8 MP camera in front of the gripping mechanism takes the visual feed, and a separate piece of hardware (a Raspberry Pi Model 3) is
used to do the computation and to ensure that there is little or no delay. After the robot receives and analyzes a voice command to pick up a specific object, the classifier searches for that object by taking a photo of the vision field, from which the position of the object is determined. After a photo is taken, it is converted into a simplified image by applying gradient maps and background cancellation. The image processing is done using OpenCV, which we will keep using for further development as well, since it is open source. The classifier discussed here uses TensorFlow, and the programming language is Python. The training images should be taken from various angles under different lighting conditions to ensure maximum accuracy; preferably, at least 150 pictures under 5 lighting conditions are needed for the initial training for an efficient execution of this model. This increases the accuracy of detection and ensures a greater rate of success at pinpointing the target object.

Diagram 2: Object Classifier Module

IV. ROBOTIC GRIPPER ARM

For the gripper part, a custom-made 3D-printed robotic arm with parallel-jaw grippers is used, controlled by three servo motors. While designing the gripper arm in SolidWorks, some features require attention:
1. Configuration of the gripper footprint
2. Choosing the required shape and size for clamps, brackets and extensions, with the possibility of further modification for more adjustment flexibility
3. Selecting the right gripper for the application
4. An actuation shaft to enable mobility

Fig 1: Gripper arm that can be subjected to a 360 degree rotation along the horizontal plane (the placement of the motors not shown in the figure).

Fig 2: Gripper arm that can be subjected to a 180 degree rotation along the vertical plane (the placement of the motors not shown in the figure).

After the voice command activates the object classifier to find the target object, we get the position of the object; the arm is then activated to reach that position and pick up the object. In the prototype, the positions are one of three predetermined positions. To keep the model simple, the servo motors are given definite instructions according to predefined points for object placement; potential improvements to this section are discussed later. The grasping mechanism consists of (i) object engagement and (ii) force locking. The object is then released into a bucket placed at the right; the release cycle works on the same procedure in the reverse sequence. This whole process is conducted using an Arduino Mega. After putting the desired object in the bucket, the gripper returns to its initial position and the voice command module is activated again for the next command.

V. LIMITATIONS AND FUTURE WORK

What has been discussed here is just the blueprint of a flexible design. Slight modifications of this general design model (different training data and slightly different grippers) can lead to different specialized
designs. We are working on building a prototype of an industrial-grade specialized version of the Bangla voice controlled robotic gripping mechanism based on this design. The prototype so far recognizes only four Bangla voice commands from different users. If NLP were incorporated into this model, the area of its uses could be broadened; but no significant work has yet been done on Bangla language processing that could be incorporated while developing this robot. The gripping module in this design uses preset commands to grab the desired object, which has been placed in one of three different preset positions. Incorporating hand-eye coordination [2] could add a significant amount of flexibility and accuracy, but this approach could not be included in this model due to lack of resources, both computational and financial; it is left for future work instead. The prototype is being extended to incorporate more voice commands, and the classifier is being extended to recognize a wider variety of objects. Different approaches are being tested to reduce the amount of computation during training so that this process can be done by users with their own version of this robot.

VI. APPLICATION

There are probably only a few sectors where the voice controlled arm cannot be implemented in this era of rapidly increasing automation. These kinds of robots are well suited to industries where they can be commanded by voice to keep performing a certain task. The second major aspect is that, after slight modification, they can easily be used for remote control. These robotic gripper mechanisms can also be used in the medical sector: very low-cost simple prosthetic arms can easily be developed as a further modification of this prototype model. With the emergence of robotic surgery, this voice controlled model may serve to enhance distant surgeries, as well as increase efficiency by performing tasks during nanoscale operations. The most outstanding breakthrough is the restoration of mobility and independence of paralyzed patients by using neural networks [8], [9]. This gripper can also serve as a rescue bot during natural disasters: since the working principle is based on voice recognition, rescue operations can be performed easily, as the bot is programmed to react to specific audio inputs and cancel out irrelevant noise. It can also be entrusted with household chores and work as an efficient assistant in laboratories. It can even serve as an intelligent pet like Cozmo and act as a mood booster.

VII. CONCLUSION

Standing on the stairway of advancing neural-network-based modules in practically every aspect of modern life, it is perhaps high time to replace manual parts with highly efficient neural-network-incorporated machine parts. This voice controlled hand can potentially recognize any command in Bangla and act accordingly if trained well. Rescue missions can be conducted in dangerous situations, remote surgeries can be performed, productivity in industrial sectors can be subjected to rapid growth, and further modification of
this module can render accessibility of usage in basically any sector. A low-cost robotic system is very much needed in the context of the developing economy of Bangladesh, where Bangla is the main spoken language.

References

[1] Haytham M. Fayek, Margaret Lech, Lawrence Cavedon, "Evaluating deep learning architectures for Speech Emotion Recognition," Elsevier Journal on Neural Networks, August 2017 special ed., vol. 92, pp. 60-68.
[2] Sergey Levine, Peter Pastor, Alex Krizhevsky, "Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection," 2016 International Symposium on Experimental Robotics, 21 March 2017, pp. 173-184.
[3] Greg Causey, "Guidelines for the design of robotic gripping systems," Assembly Automation, vol. 23, issue 1, pp. 18-28.
[4] Arnab Bhattacharjee, Asir Intisar Khan, M. Z. Haider, "Bangla Voice Controlled Robot for Rescue Operation in Noisy Environment," Region 10 Conference (TENCON) 2016 IEEE.
[5] Alexander Schneider, Jürgen Sturm, Cyrill Stachniss, "Object Identification with Tactile Sensors using Bag-of-Features," Intelligent Robots and Systems, 2009.
[6] Brandi House, Jonathan Malkin, Jeff Bilmes, "The VoiceBot: a voice controlled robot arm," CHI '09, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 183-192.
[7] R. Zhou, K. P. Ng, Y. S. Ng, "A Voice Controlled Robot Using Neural Network," Intelligent Information Systems, 1994.
[8] Meel Velliste, Sagi Perel, M. Chance Spalding, "Cortical control of a prosthetic arm for self-feeding," Nature 453, pp. 1098-1101, 19 June 2008.
Domain Specific Intelligent Personal Assistant with Bilingual Voice Command Processing

Saadman Shahid Chowdhury, Department of Electrical and Computer Engineering, North South University, Dhaka, Bangladesh, saadman.shahid@gmail.com
Tanzilur Rahman, Department of Electrical and Computer Engineering, North South University, Dhaka, Bangladesh, tanzilur.rahman@northsouth.edu
Atiar Talukdar, Department of Electrical and Computer Engineering, North South University, Dhaka, Bangladesh, swajan.talukdar@gmail.com
Ashik Mahmud, Department of Electrical and Computer Engineering, North South University, Dhaka, Bangladesh, ashik.mahmud137@gmail.com

Abstract — Intelligent Personal Assistants (IPA), like Siri and Alexa, are created to assist their users with simple digital tasks. Here, we propose the steps we have used to develop a voice-operated IPA which can process direct commands in two languages, English and Bengali, to perform menial tasks for its users. The speech recognition engine of the IPA is constructed with Sphinx-4, and the language processing is performed by a modified finite state automaton. The IPA also takes advantage of the subject/action structure of commands to reduce the size of the word domain, and utilizes a generalization function to ensure that the language processor can understand multiple languages without undergoing major modification - making this approach suitable when training data is limited.

Keywords — Digital Assistant, Artificial Intelligence, Automated Speech Recognition, Finite State Automaton, Language Processing, Voice User Interface.

I. INTRODUCTION

An Intelligent Personal Assistant (IPA) is a computer program with Artificial Intelligence (AI), designed to aid its users with their tasks. The IPA communicates seamlessly with its users while it answers their inquiries or performs actions to satisfy their requests [1]. Modern IPAs can perform a wide variety of tasks, ranging from binary tasks like opening an app or setting an alarm to more complex tasks like noting down dictation or making phone calls. Notable examples of such IPAs are Google Assistant from Google, Siri from Apple, and Alexa from Amazon. IPAs do not necessarily need to communicate only through voice, but modern IPAs are pushing towards Voice User Interfacing (VUI), i.e. interacting with users only through voice, without the use of screens or physical interaction [2]. This requires the IPA to: (A) listen to human speech, (B) understand what is being implied, and (C) perform an action or reply with its own synthesized voice [3]. This concept is visualized in Fig. 1.

Firstly, for (A), the IPA needs Automated Speech Recognition (ASR) capabilities. One such ASR module creation tool is the Sphinx-4 framework, created by Carnegie Mellon University, which uses Hidden Markov Models, a probabilistic approach to classifying audio signals into words [4]. Furthermore, ASR modules require linguistic knowledge libraries called acoustic models for each language. In our research, we have used Sphinx-4 and its aiding software, SphinxTrain, to create a speech recognition model suited to our IPA, which understands domain-specific commands in two languages: English and Bengali [5][6]. Secondly, for (B), the IPA needs to use a Natural Language Processing (NLP) module. The NLP's task is to infer what is being implied
<s>in a sentence, and decide on the course of action in response. A simple, yet effective, way of creating an NLP module is to use Deterministic Finite State Automatons (FSA) [7] - which are unidirectional graphs with two types of vertices: accepting states and non-accepting states. Through traversal of the graph, if an accepting state is reached, an action takes place; otherwise, a non-accepting state is reached, where no action takes place or an error is thrown [8]. For the purpose of this paper, we have created a functional IPA, which can do menial tasks like turning on/off smartphone applications. It is able to process “audio commands” and perform actions through VUI. This paper is written to assist the readers in replicating our steps to create their own IPA. Finally, to demonstrate how the IPA can be fine-tuned for multiple languages, we have designed our IPA to understand two languages: English and Bengali. In II.Methodology, we describe the steps in detail and give instructions on how to build the ASR and NLP, and how to improve the effectiveness of the IPA by identifying the “subject/action” structure of commands. In III.Limitations, we note what the shortcomings of our approach is. Finally, In IV.Conclusion we provide pointers on how our work can be further improved upon to create a more effective IPA. Fig. 1. Concept of IPA Proceedings of TENCON 2018 - 2018 IEEE Region 10 Conference (Jeju, Korea, 28-31 October 2018)0731978-1-5386-5457-6/18/$31.00 ©2018 IEEEII. METHODOLOGY To make our methodology easier to understand, we have shown the flow of information through the IPA in Fig.2, starting from (1) “User speaks to IPA” to (7) “Observable action”. Where (1) is the input to the IPA, and (7) is the output from the IPA. Briefly - from (1) an audio file is produced, which is passed through the ASR module (2),(3) - creating a text file, i.e. the spoken sentence into computer text. 
The text file is then passed through the NLP module (4), (5), which attempts to infer what is being implied in the sentence: if it cannot comprehend the user's command, it throws an error; if it can understand what is being implied, it passes a specific key to the next module. Step (6) accepts the key and performs an action depending on the particular key. Each step along the way, from (2) to (7), is described in the Methodology in detail. The interaction between the modules is visualized in Fig. 1.

A. Step 2: Audio Processing

In step (1), the user speaks into the microphone of the device, which creates an audio file. The audio file is an array of integers containing information on the audio signal. Depending on the recording device, the audio file might have characteristics that cannot be processed by speech recognition engines made using Sphinx-4. Hence, the audio file's properties must first be converted to mono channel, 16000 Hz sampling rate, and 32-bit sample size. This standardizes all audio inputs to the speech recognition engine.

B. Step 3: Speech Recognition Engine

The Sphinx framework requires 3 files to operate: the Language Model (LM), the Acoustic Model (AM), and the Dictionary. The Dictionary is simply a list of words which our ASR is able to classify; the full list of words in our dictionary is in Fig. 3. The LM is the phonetic breakdown of the words in the Dictionary: it contains the sequence of phonetic units (phonemes) that make up those words. The LM can be acquired by uploading the Dictionary to the Sphinx Knowledge Base Tool website. The AM is a file that contains probabilistic values which are used, through Hidden Markov Models, to identify phonemes from the values in audio files. The AM can be created using the SphinxTrain tool, which accepts a speech corpus: audio AM training files and transcript files. The transcript is simply the text representation of the audio files. Our training corpus for SphinxTrain is 2.5 hours long and has recordings of 120 speakers, speaking 35 different words, 10 times each in different pitch and accent; this is sufficient for domain-specific speech recognizers.

The IPA has been designed with efficiency in mind. We have identified a structure in direct verbal commands: there is always at least one "subject" and at least one "action" that needs to be performed on that subject; all the other words are usually redundant. By only identifying the subject words and action words, and discarding the rest of the words in the command, it is possible to understand what the IPA needs to do. E.g. in the instruction "could you please turn on the facebook app?", by discarding the majority of the words and only identifying the subject word "facebook" and the action word "on", the IPA can deduce what to do next. Because fewer words are used, less processing is needed and the success rate is higher; yet, to the user, the effect is the same. Fig. 2.
Flow of information through the IPA.

C. Step 4: Generalizer

The Generalizer allows the IPA to be multilingual. It is a function that takes each word from step (3) as input and outputs a token or a null value for each word. An outputted token is a word which is a more "general definition" of the inputted word, e.g. Generalize("start") = "ON". Here, the input word "start" is generalized to the output token "ON", which is a more general definition of "start". The Generalizer is important because there are multiple words (usually synonyms of each other) which imply the same meaning in a certain context, e.g., similar to "start":

Generalize("open") = "ON"
Generalize("on") = "ON"

Similarly, words in different languages which are also synonymous can be generalized to the same token, e.g.

Generalize("kholo") = "ON"
Generalize("chalu") = "ON"

The above Bengali words, "kholo" (meaning "to open") and "chalu" (meaning "begin"), are synonymous, and can be generalized to the token "ON". Also, any word that is not in the Generalizer is considered redundant and returns a null value; e.g., the word "please" is redundant, as it holds no meaning when delivering commands. Here lies the importance of the Generalizer: by converting the input words to generalized tokens, the complexity of the FSA in step (5) is reduced greatly, and the FSA is able to operate in multiple languages as well. Table 1 contains a list of all the tokens used in our IPA. The following are some examples of the Generalizer in action:

Generalize("please open the facebook app") = "ON", "FACEBOOK"
Generalize("turn off facebook") = "OFF", "FACEBOOK"

Similarly, the two sentences below are direct translations of the above English sentences. Note that the sequences of tokens are not the same as above because subjects are placed before actions in Bengali (this is accounted for in Step 5).

Generalize("doya kore facebook chalu koro") = "FACEBOOK", "ON"
Generalize("facebook bondho koro") = "FACEBOOK", "OFF"

D. Step 5: Finite State Automata

In the deterministic finite state automaton, traversal from the current state to an adjacent state can only take place if the "required token" of one of the outgoing edges of the current state equals the "input token". When a sequence of tokens from step (4) is inputted to the FSA, the FSA is traversed depending on the sequence and a particular end state is reached; if the end state is an accepting state, the FSA sends instructions to the next step to perform an action depending on the specific end state. We have designed our FSA to perform actions in response to the user's commands: in step (4), the Generalizer deduces what the user is saying, and based on that information, in step (5) the FSA deduces what the IPA needs to do. Our design of the FSA is shown in Fig. 3.
Furthermore, we have upgraded our FSA to have a third type of state: the "Question State". This state is reached when the IPA cannot reach an accepting state because it does not have full information or did not understand the user's question, so it sends instructions to the output interface to ask the user for more information. As an example, consider from Fig. 4: if the user says "The facebook app", the token "FACEBOOK" is inputted to the FSA and state 2 is reached. Here, the FSA does not know what to do with the facebook app ("should the app be opened or closed?"). Hence state 2 is a Question State. After the user responds with an answer, the FSA starts traversal from the Question State (state 2, here) and either reaches an end state or is unable to comply. When the FSA reaches an accepting state or question state, a "key" with a specific ID is generated and inputted to Step 6. This "key" acts as a signal to the application functionality on what action to perform.

E. Step 6: Application Functionality

The application functionality is the program which performs observable actions for the user on the user interface (which can be visual or audio), where the user can either observe actions taking place or receive any responses/errors from the IPA. Here, we simply created a switch-case where each case calls a different function that communicates with the hardware. The hardware then performs the action programmed in the particular function, leading to Step (7): the user observes the action.

TABLE I: LIST OF WORDS/TOKENS IN THE GENERALIZER

Token     Words
ON        Open, Begin, Initiate, Start, Launch, Charo, Kholo, Chalu, Chalao, Jalao
OFF       Off, Close, Stop, Bondho, Thamao
CHROME    Chrome, Browser
FACEBOOK  Facebook, Ef Be

Fig. 3. The design of the Finite State Automaton

Using Fig. 2 as a roadmap, the following is an example of a successful execution of a command. Step 1: the user says "please open the facebook app". Step 2: the audio file is preprocessed. Step 3: the ASR classifies the action word "open" and the subject word "facebook". Step 4: the Generalizer, from Table 1, generates the tokens "ON", "FACEBOOK". Step 5: the FSA, in Fig. 3, reaches accepting state 5 and passes key_5 forward. Step 6: the application functionality calls the function associated with key_5 to instruct the device's operating system to initiate the Facebook app. Step 7: the user observes the Facebook app being opened.

III. LIMITATIONS

The Sphinx-4 framework has its own audio noise reduction system, but it is limited in its abilities, and from our experience, attempting to add our own noise cancelling module increases the word error rate of existing acoustic models.
The FSA needs to be designed by the developer; hence, when creating an FSA to process more complex commands, the developer needs to ensure that all possible cases are accounted for. However, for simple and direct commands, and when datasets are not available, the FSA with the Generalizer is a much more cost-effective solution than machine learning. As for the Generalizer, two words may have the same spelling but different meanings, e.g. "bat" can be a playing stick or a flying mammal. This ambiguity is not prevalent in domain-specific applications, but may be an issue when scaling up.

IV. CONCLUSION

The approach specified in this paper is suitable for creating an Intelligent Personal Assistant which is capable of understanding domain-specific, direct voice commands from users. By selecting only the subject words and action words for training, the size required for the corpus is smaller. The Generalizer and FSA allow the creation of an efficient solution to natural language processing when developers do not have access to sufficient training data. Although this approach has limitations (i.e. it is useful only when the IPA needs to be domain-specific), it can be further improved. Some recommended improvements: by using a larger audio/speech corpus for training the CMU Sphinx, the success rate can be increased significantly; the Generalizer can be designed to be context-sensitive to lessen the issue of ambiguity; and the addition of a voice synthesizer after the application functionality step can turn the IPA into a complete VUI.

REFERENCES
[1] J. Kurpansky, "What Is an Intelligent Digital Assistant?", Medium, 2017.
[2] J. Iso-Sipila, M. Moberg, and O. Viikki, "Multi-lingual speaker-independent voice user interface for mobile devices," in ICASSP 2006, IEEE, 2006, vol. I, pp. 1081-1084.
[3] S. Springenberg, "Intelligent Personal Assistants," in Speech Technology Seminar, Institute of Computer Science, Hamburg, 2016, pp. 5-7.
[4] M. Vasilache, J. Iso-Sipilä and O. Viikki, "On a Practical Design of a Low Complexity Speech Recognition Engine," in Proc. Int. Conf. on Acoustics, Speech and Signal Processing, Montreal, Quebec, Canada, 2004, vol. 5, pp. 113-116.
[5] M. M. H. Nahid, M. A. Islam and M. S. Islam, "A noble approach for recognizing Bangla real number automatically using CMU Sphinx4," 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV), Dhaka, 2016, pp. 844-849.
[6] P. K. Sahu and D. S. Ganesh, "A study on automatic speech recognition toolkits," 2015 International Conference on Microwave, Optical and Communication Engineering (ICMOCE), Bhubaneswar, 2015, pp. 365-368.
[7] R. Rangra and Madhusudan, "Natural language parsing: Using finite state automata," 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, 2016, pp. 456-463.
[8] J. E. Hopcroft, R. Motwani, and J. D. Ullman, Introduction to Automata Theory, Languages, and Computation.
<FEFF005400610074006f0020006e006100730074006100760065006e00ed00200070006f0075017e0069006a007400650020006b0020007600790074007600e101590065006e00ed00200064006f006b0075006d0065006e0074016f002000410064006f006200650020005000440046002000760068006f0064006e00fd00630068002000700072006f002000730070006f006c00650068006c0069007600e90020007a006f006200720061007a006f007600e1006e00ed002000610020007400690073006b0020006f006200630068006f0064006e00ed0063006800200064006f006b0075006d0065006e0074016f002e002000200056007900740076006f01590065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f007400650076015900ed007400200076002000700072006f006700720061006d0065006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076011b006a016100ed00630068002e> /DAN <FEFF004200720075006700200069006e0064007300740069006c006c0069006e006700650072006e0065002000740069006c0020006100740020006f007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000650067006e006500720020007300690067002000740069006c00200064006500740061006c006a006500720065007400200073006b00e60072006d007600690073006e0069006e00670020006f00670020007500640073006b007200690076006e0069006e006700200061006600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020004400650020006f007000720065007400740065006400650020005000440046002d0064006f006b0075006d0065006e0074006500720020006b0061006e002000e50062006e00650073002000690020004100630072006f00620061007400200065006c006c006500720020004100630072006f006200610074002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e> /DEU 
<FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e002000410064006f006200650020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200075006d002000650069006e00650020007a0075007600650072006c00e40073007300690067006500200041006e007a006500690067006500200075006e00640020004100750073006700610062006500200076006f006e00200047006500730063006800e40066007400730064006f006b0075006d0065006e00740065006e0020007a0075002000650072007a00690065006c0065006e002e00200044006900650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f00620061007400200075006e0064002000520065006100640065007200200035002e003000200075006e00640020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e> /ESP <FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f0073002000640065002000410064006f00620065002000500044004600200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e0020006500200069006d0070007200650073006900f3006e00200064006500200063006f006e006600690061006e007a006100200064006500200064006f00630075006d0065006e0074006f007300200063006f006d00650072006300690061006c00650073002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /FRA 
<FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE <FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003b103be03b903cc03c003b903c303c403b7002003c003c103bf03b203bf03bb03ae002003ba03b103b9002003b503ba03c403cd03c003c903c303b7002003b503c003b903c703b503b903c103b703bc03b103c403b903ba03ce03bd002003b503b303b303c103ac03c603c903bd002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b
903c2002e> /HEB <FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005E205D105D505E8002005D405E605D205D4002005D505D405D305E405E105D4002005D005DE05D905E005D4002005E905DC002005DE05E105DE05DB05D905DD002005E205E105E705D905D905DD002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E05D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV (Za stvaranje Adobe PDF dokumenata pogodnih za pouzdani prikaz i ispis poslovnih dokumenata koristite ove postavke. Stvoreni PDF dokumenti mogu se otvoriti Acrobat i Adobe Reader 5.0 i kasnijim verzijama.) 
/HUN <FEFF00410020006800690076006100740061006c006f007300200064006f006b0075006d0065006e00740075006d006f006b0020006d00650067006200ed007a00680061007400f30020006d0065006700740065006b0069006e007400e9007300e900720065002000e900730020006e0079006f006d00740061007400e1007300e10072006100200073007a00e1006e0074002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c00200068006f007a006800610074006a00610020006c00e9007400720065002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA (Utilizzare queste impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) 
/JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f0020006e00690065007a00610077006f0064006e00650067006f002000770079015b0077006900650074006c0061006e00690061002000690020006400720075006b006f00770061006e0069006100200064006f006b0075006d0065006e007400f300770020006600690072006d006f0077007900630068002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB 
<FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM <FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e007400720075002000760069007a00750061006c0069007a00610072006500610020015f006900200074006900700103007200690072006500610020006c0061002000630061006c006900740061007400650020007300750070006500720069006f0061007201030020006100200064006f00630075006d0065006e00740065006c006f007200200064006500200061006600610063006500720069002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS 
<FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043f043e04340445043e0434044f04490438044500200434043b044f0020043d0430043404350436043d043e0433043e0020043f0440043e0441043c043e044204400430002004380020043f04350447043004420438002004340435043b043e0432044b044500200434043e043a0443043c0435043d0442043e0432002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SLV <FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020007000720069006d00650072006e006900680020007a00610020007a0061006e00650073006c006a00690076006f0020006f0067006c00650064006f00760061006e006a006500200069006e0020007400690073006b0061006e006a006500200070006f0073006c006f0076006e0069006800200064006f006b0075006d0065006e0074006f0076002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR 
<FEFF005400690063006100720069002000620065006c00670065006c006500720069006e0020006700fc00760065006e0069006c0069007200200062006900720020015f0065006b0069006c006400650020006700f6007200fc006e007400fc006c0065006e006d006500730069002000760065002000790061007a0064013100720131006c006d006100730131006e006100200075007900670075006e002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f006200610074002000760065002000410064006f00620065002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /ENU (Use these settings to create Adobe PDF documents suitable for reliable viewing and printing of business documents. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
|
Continuous Bengali Speech Recognition Based on Deep Neural Network

2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), 7-9 February, 2019

Md. Alif Al Amin∗, Md. Towhidul Islam†, Shafkat Kibria‡ and Mohammad Shahidur Rahman§
Department of Computer Science and Engineering
Shahjalal University of Science and Technology
Sylhet-3114, Bangladesh
Email: ∗alifalamin4@gmail.com, †tuhintowhidul9@gmail.com, ‡shafkat80@gmail.com, §rahmanms.bd@gmail.com

Abstract—Nowadays, deep learning is among the most reliable approaches to acoustic modeling in the field of speech recognition. Working with a language like Bengali, which is not resource-rich in terms of the availability of parallel data (i.e., speech with aligned text), is a challenging problem. Moreover, many deep learning approaches pursue better performance on the Bengali language without benchmarking on a specific corpus, so the reported results are biased. In this paper, DNN-HMM and GMM-HMM based models, as implemented in the Kaldi toolkit, are benchmarked for continuous Bengali speech recognition on a standard, publicly published corpus called SHRUTI. The best word error rate (WER) previously achieved on SHRUTI was 15%, using a CMU-SPHINX based GMM-HMM; this study shows that Kaldi-based feature extraction recipes with DNN-HMM and GMM-HMM acoustic models achieve WERs of 0.92% and 2.02%, respectively. Another finding of this study is that the WERs of the two models are very close because the corpus is small.

Index Terms—Speech Recognition; DNN-HMM; GMM-HMM; Word Error Rate (WER)

I. INTRODUCTION

About 260 million people speak Bengali as their native language [1], a noticeable share of the world's population. It is therefore a matter of regret that research on Bengali speech recognition remains scarce in both quantity and quality.
Nowadays speech is one of the most natural communication media between a human and a machine, and this communication is effective only when people can interact with machines in their mother language. Effective research on Bengali speech recognition is therefore a pressing need.

Speech recognition is the technique of translating speech data into a corresponding document, i.e., decoding the human voice into machine-readable content. In speech recognition research, deep learning is currently a very active topic: deep learning approaches have become popular because they outperform many previously developed models. With CPUs increasingly complemented by GPUs, training large models has become practical, and multi-threaded processing across multiple GPUs and CPUs further encourages researchers to use machine learning approaches.

Much research on speech recognition is ongoing, but maintaining decoding accuracy remains challenging: feature extraction, speaker normalization, acoustic modeling, language modeling, and related tasks are all demanding. In recent years, however, ASR systems have achieved strong performance using open research tools such as Kaldi, SPHINX, CMU LM, and HTK, so building an ASR system is much easier than before. Among these toolkits, Kaldi is the most popular and has the widest variety of acoustic model implementations [2]; that is why Kaldi has been selected as the working toolkit for this study. The GMM-HMM and DNN-HMM based models used here are those implemented in Kaldi [3], [4].
There are two DNN implementations in Kaldi. The first is a DBN (Deep Belief Network) with pre-trained RBMs (Restricted Boltzmann Machines), implemented by Karel. The other is a DNN trained with greedy layer-wise supervised training, implemented by Dan; being the more recent implementation, it is the one used here. GMM-HMM based models were once the best-performing acoustic models, but DNN models now outperform them [5], [6].

Several studies in 2009 showed that DNN acoustic models outperformed the best published recognition results on TIMIT, a benchmark English-language dataset for testing new speech recognition algorithms [7], [8]. Several other English corpora are available for evaluating the performance of new algorithms, and DNN-based acoustic modeling has outperformed earlier approaches on most corpora with more than 100 hours of data [4]. For the Bengali language, however, no such benchmark dataset exists. Over the last few years, several new approaches have been attempted on different published and unpublished datasets and have reported improved performance, but those results are biased because they are not benchmarked on a specific dataset [9] "in press" [10]–[12].

Using the SHRUTI Bengali speech corpus, 21.64 hours of phonetically transcribed data have been processed to build a standard transcribed Bengali corpus [13]. The corpus was then prepared for training in Kaldi. 143-dimensional feature vectors were extracted using the MFCC feature extraction method, and LDA, MLLT, and SAT were used for speaker adaptation. Two hybrid models (GMM-HMM and DNN-HMM) were applied to continuous Bengali speech recognition, and the performance shown by these models is very satisfactory.

978-1-5386-9111-3/19/$31.00 ©2019 IEEE

II. RELATED WORKS

Speech recognition research in many languages is far ahead of Bengali; there are very few works on Bengali speech recognition.
One team worked on continuous Bengali speech recognition using the Speech Application Programming Interface (SAPI) of Microsoft Corporation [14], but their corpus is very small, consisting of 270 words. For a one-to-one English-to-Bangla relationship, their recognition rate is 58.22%; for a one-to-many relationship, it is 74.81%.

Another team of researchers did notable work on isolated and continuous Bangla speech recognition [15]. Using a Hidden Markov Model (HMM) classifier on 100 unique Bangla words, they attempted to recognize both isolated and continuous words; for isolated words, the speaker-dependent and speaker-independent recognition rates were 90% and 70%, respectively.

A further study implemented a back-propagation neural network for recognizing Bengali digits only [16]: speaker-dependent accuracy was 96.3% and speaker-independent accuracy was 92%. The accuracy was good enough, but the corpus was too small for real-world implementation. Automatic recognition of real numbers was implemented by another team using CMU-SPHINX, with 85% accuracy on a personal computer and 75% on an Android mobile [17].

Recently, excellent work has been done in several other languages. A research team worked on continuous Hindi speech recognition with a dataset of 1000 unique sentences [18]; their word error rate (WER) was better than many previous results for Hindi. There is also good research on continuous Serbian speech recognition, using 90 hours of speech data and 21000 utterances [19], with very satisfactory results: their GMM-HMM WER is 2.19%, and that of a DNN with 3 hidden layers
is 1.86%.

III. FEATURE EXTRACTION

Mel-Frequency Cepstral Coefficients (MFCC) have been selected as the feature extraction technique. MFCC mimics human voice production and perception and is the most popular feature extraction technique. PLP is also very popular and performs better on noisy datasets [20]; as the dataset here is clean enough, MFCC is preferable for this experiment.

On top of conventional MFCC, an improved feature extraction technique has been used: the 13-dimensional vectors are spliced across 11 frames to obtain 143-dimensional feature vectors. A conventional MFCC derivation is shown in Figure 1.

Fig. 1. Mel Frequency Cepstral Coefficients Derivation

After that, LDA (Linear Discriminant Analysis) is applied for de-correlation and dimensionality reduction, MLLT is applied over it for more precise features, and fMLLR is used to normalize inter-speaker variability. The improved feature extraction technique is thus created by applying LDA + MLLT + fMLLR on top of MFCC [21], as shown in Figure 2.

Fig. 2. Improved Output from MFCC

IV. GMM-HMM MODELING

The hidden Markov model is one of the simplest and most effective classifiers, with many applications [22]. In the context of speech recognition, features are extracted from the speech data, and a system consisting of an acoustic model (HMM), a phonetic model, a language model, and a search space generates a predicted output from the given training data. The whole process is shown in Figure 3.

Fig. 3. Conventional Hidden Markov Model

After feature extraction, a sequence of fixed-size acoustic vectors is obtained, Y(1:T) = y1, y2, ..., yT. To decode the audio file into a sequence of words, the system must determine the word sequence (Equation 1):

W(1:L) = w1, w2, ..., wL    (1)

W denotes the most likely word sequence.
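As an aside on the feature pipeline of Section III, the frame-splicing step (13-dimensional MFCC vectors spliced across 11 frames, giving 13 × 11 = 143 dimensions) can be sketched as follows. This is a minimal illustration on synthetic data; `splice_frames` is our own helper name, not Kaldi's implementation, and the subsequent LDA + MLLT + fMLLR stages are omitted.

```python
import numpy as np

def splice_frames(feats, left=5, right=5):
    """Splice each frame with `left` and `right` neighbouring frames.

    A (T, 13) MFCC matrix becomes (T, 13 * (left + right + 1)) = (T, 143).
    Edge frames are handled by repeating the first/last frame.
    """
    T, _ = feats.shape
    padded = np.vstack([np.repeat(feats[:1], left, axis=0),
                        feats,
                        np.repeat(feats[-1:], right, axis=0)])
    # Column block i holds frame offsets (i - left) relative to the centre.
    return np.hstack([padded[i:i + T] for i in range(left + right + 1)])

# Synthetic utterance: 100 frames of 13-dimensional MFCCs.
mfcc = np.random.randn(100, 13)
spliced = splice_frames(mfcc)
print(spliced.shape)  # (100, 143)
```

In the actual pipeline these spliced vectors would then be de-correlated and reduced with LDA before MLLT and fMLLR are applied.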
W* is obtained by maximizing P(W|Y) (Equation 2):

W* = argmax_W P(W|Y)    (2)

P(W|Y) is calculated using Bayes' theorem (Equation 3):

P(W|Y) = P(Y|W) · P(W) / P(Y)    (3)

Since P(Y) does not depend on W, W* can be written as Equation 4:

W* = argmax_W P(Y|W) · P(W)    (4)

P(Y|W) is determined from the acoustic model and P(W) from the language model. For the acoustic model, every word w is decomposed into a sequence of phones, Q(w) = q1, q2, ..., qk, which gives the relation in Equation 5:

P(Y|W) = Σ_Q P(Y|Q) · P(Q|W)    (5)

Decoding can then move through a Markov chain with the best transition probabilities, and the output distributions are modeled with multivariate Gaussians; hence the name GMM-HMM model.

Nowadays, researchers are also working with sGMMs (subspace Gaussian Mixture Models) [23]. This type of modeling allows a better representation of the data and better results for small amounts of data; sGMM has also been used in this experiment.

V. DNN-HMM MODELING

The GMM-HMM setup is used as the basis for training the DNN-HMM model. As mentioned, Dan's implementation of DNN training, greedy layer-wise supervised training, has been used. First, the network is initialized with an input layer, one hidden layer, and a softmax layer (Figure 4). The system is then trained for a
small amount of time (5 iterations). After that, the softmax layer is removed and another hidden layer with a different set of weights is added. This process is repeated 3 times to obtain the full DNN. About 20 training iterations are then performed, followed by 10 more iterations with a constant learning rate: the learning rate starts at 0.02 and is finally held constant at 0.004. There are many DNN implementations for acoustic modeling; this is one of the common approaches. The GMM of the GMM-HMM model is replaced with the DNN to build the DNN-HMM hybrid model.

Fig. 4. DNN implementation

VI. DATA PREPARATION

In this study, the SHRUTI Bengali speech corpus has been used [13]. This corpus was created by students of the Indian Institute of Technology, Kharagpur, who have worked on vowel and word recognition and other corpus analyses. The corpus size and speaker details are shown below.

Unique words | Utterances | Speakers | Male | Female
22012        | 13025      | 34       | 26   | 8

The whole corpus contains 21.64 hours of speech data. About 75% of the dataset is used for training and 25% for testing. Of the 34 speakers, 20 male and 5 female speakers (25 in total) were assigned to the training set, and the remaining speakers (6 male and 3 female, 9 in total) to the test set.

The default SHRUTI corpus was phonetically transcribed, and in English letters, so preprocessing was needed for the system to produce output in the Bengali language. First, the English phonetic transcriptions were converted into Bengali letters; then the lexicon entries were mapped to their corresponding words. Figure 5 shows the preprocessing of the data, and Figure 6 shows a practical example of preprocessing a speech transcription.

Fig. 5. Bengali Transcription
Fig. 6. Practical Bengali Transcription

Preparing the data for Kaldi is a sequence of steps:
• Non-silence unique phones
• Building the lexicon
• Corpus
• Speaker-to-utterance mapping
• Utterance-to-speaker mapping
• Speaker-to-gender mapping
• Defining paths for utterances

A. Unique Phones
There are 49 unique non-silence phones from which all the words can be generated.

B. Lexicon
All words are constructed from the non-silence phones and the silence phone described in the lexicon file.

C. Corpus
This file contains the words that will be output when the system decodes any utterance.

D. Speaker-to-Utterance and Utterance-to-Speaker Mapping
Every utterance used in the corpus is mapped to its corresponding speaker, and the same process is applied in the opposite direction.

E. Speaker-to-Gender Mapping
A file holds the gender of every speaker: the letter 'm' denotes a male speaker and 'f' a female speaker.

The system and corpus configuration is as follows:
• Sampling rate: 16000 Hz
• Feature extraction: MFCC
• Vocabulary size: 22012
• Most utterances differ in content (sports, movies, political topics, etc.); a 3-gram language model was built from the labels of the training speeches using the SRILM toolkit
• Every utterance contains two or three Bengali sentences

VII. EXPERIMENT AND RESULTS ANALYSIS

In this study, an
|
<s>open resource(SHRUTI) has been selectedfor experiments. Improved feature extraction has been givenas the input of different types of acoustic models like DNN-HMM, GMM-HMM and sGMM. Word error rate(WER) met-ric has been used for measuring the system performance.Word Error Rate(WER) =S + I +D(6)Here (Equation 6) S is the number of substitutionsD is the number of deletionsI is the number of insertionsN is the Number of words in the reference speechThen a comparative result analysis of different models hasbeen shown in Figure 8. Previously, there is a speech recog-nition research work has been done on this corpus. Anothercomparison has been shown between this experiment and theprevious in Figure 9.The working procedure of a Speech recognition system hasbeen discussed in the Figure 7.Fig. 7. Speech Recognition SystemThese tasks mentioned below has been performed sequen-tially to get the expected performance from acoustic models.• Feature Extraction (MFCC)• Mono-phone Model Training• Audio alignment• Tri-phone Model training• Re-align audio and tri-phone• LDA-MLLT• LDA-MLLT-SAT(Speaker Adaptation Technique)GMM-HMM and DNN-HMM models have been usedwhich are developed in Kaldi toolkit in Linux platform.The sentences used in dataset are phonetically compact anddesigned to cover most of the frequent speaking word in theBengali language. A tri-gram language model has been usedfor getting better performance from the output of acousticmodels.A web application has been developed using Kaldi,GStreamer, and some python tools. 30 combinations of dif-ferent models has been tried like mono, tri1, tri2b, tri3b,tri3b mmi,tri3b fmmi etc. Some of the accuracy among thosemodels has been mentioned in this study.Fig. 8. WER for different types of ModelsNow the performances of different types of mod-els have been shown in Figure 8. Several models arecompared with each other and DNN-HMM-based modelcomes on the top. Among GMM-HMM models hybridtri3b fmmi model has shown the best performance. 
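The WER of Equation 6 is computed from a standard edit-distance (Levenshtein) alignment between the reference and hypothesis word sequences. A minimal sketch of that computation (not Kaldi's own scoring script) follows:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / N,
    computed via Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat mat"))  # 2 deletions over 6 words
```

The minimum-cost alignment automatically attributes each error to a substitution, deletion, or insertion, which is exactly the S + D + I count in the numerator of Equation 6.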
The subspace GMM (sGMM) based model with LDA+MLLT+SAT has also shown very high performance. As mentioned, sGMM is very effective for a small corpus [23], but in the long run, on a large corpus, it will not perform as well as it does in Figure 8.

Fig. 9. A comparison of different word error rates (WERs) on SHRUTI

Figure 9 shows the comparison between the WER of the CMU Sphinx based model (tri-phone based GMM-HMM, tri3b_mllr), achieved by the IIT Kharagpur research group [13], and the WERs of the Kaldi-based models achieved in this research work. Different types of GMM-HMM based models were experimented with in this work, but the best-performing GMM-HMM model, the tri-phone based tri3b_fmmi model, was selected for the comparison. It is noticeable that both Kaldi's tri-phone GMM-HMM based model and its DNN-HMM based model perform much better than the CMU Sphinx tri-phone based GMM-HMM model.

VIII. CONCLUSION

The main goal of this study was to benchmark the performance of recent approaches to speech recognition on a specific standard dataset of the Bengali language. DNN-HMM and GMM-HMM based models were used with several of Kaldi's feature extraction recipes and training approaches, such as MFCC with mono-phone, tri-phone, LDA+MLLT, or LDA+MLLT+SAT, on a standard and publicly published Bengali language corpus (SHRUTI) for continuous speech recognition. The performances achieved by the two approaches are satisfactory and indeed very close: 0.92% WER for DNN-HMM and 2.02% for GMM-HMM based acoustic modeling on the SHRUTI corpus of 21.64 hours of speech data. It is expected that with a corpus of more than 100 hours of speech [4], the DNN-HMM approach would outperform GMM-HMM; otherwise, the results remain marginally close. The achieved performances thus confirm this observation and show the need for a large Bengali-language corpus for benchmarking the latest approaches in speech recognition.

ACKNOWLEDGEMENT

The authors wish to acknowledge financial support from the Higher Education Quality Enhancement Project (AIF Window 4, CP 3888) for the Development of Multi-Platform Speech and Language Processing Software for Bangla. The authors would also like to acknowledge the researchers of IIT Kharagpur for the open speech-data resource.

REFERENCES

[1] G. F. Simons and C. D. Fennig, "Bengali," 2018. [Online]. Available: https://www.ethnologue.com/language/ben
[2] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz et al., "The Kaldi speech recognition toolkit," in IEEE 2011 Workshop on Automatic Speech Recognition and Understanding, no. EPFL-CONF-192584. IEEE Signal Processing Society, 2011.
[3] M. Gales, S. Young et al., "The application of hidden Markov models in speech recognition," Foundations and Trends in Signal Processing, vol. 1, no. 3, pp. 195–304, 2008.
[4] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath et al., "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012.
[5] X. Zhang, J. Trmal, D. Povey, and S. Khudanpur, "Improving deep neural network acoustic models using generalized maxout networks," in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 215–219.
[6] A.-r. Mohamed, G. E. Dahl, and G. Hinton, "Acoustic modeling using deep belief networks," IEEE Transactions on Audio, Speech & Language Processing, vol. 20, no. 1, pp. 14–22, 2012.
[7] A.-r. Mohamed, G. Dahl, and G. Hinton, "Deep belief networks for phone recognition," in NIPS Workshop on Deep Learning for Speech Recognition and Related Applications, vol. 1, no. 9. Vancouver, Canada, 2009, p. 39.
[8] T. N. Sainath, B. Ramabhadran, and M. Picheny, "An exploration of large vocabulary tools for small vocabulary phonetic recognition," in Automatic Speech Recognition & Understanding, 2009. ASRU 2009. IEEE Workshop on. IEEE, 2009, pp. 359–364.
[9] S. H. Sumit, M. Tareq Al Muntasir, R. N. Nandi, and T. Sourov, "Noise robust end-to-end speech recognition for Bangla language," in International Conference on Bangla Speech and Language Processing (ICBSLP), vol. 21, 2018, p. 22.
[10] J. R. Saurav, S. Amin, S. Kibria, and M. S. Rahman, "Bangla speech recognition for voice search," in International Conference on Bangla Speech and Language Processing (ICBSLP), 2018.
[11] T. Ahmed, M. F. Wahid, and M. A. Habib, "Implementation of Bangla speech recognition in voice input speech output (VISO) calculator," in International Conference on Bangla Speech and Language Processing (ICBSLP), 2018.
[12] S. A. Sumon, J. Chowdhury, S. Debnath, N. Mohammed, and S. Momen, "Bangla short speech commands recognition using convolutional neural networks," in International Conference on Bangla Speech and Language Processing (ICBSLP), 2018.
[13] B. Das, S. Mandal, and P. Mitra, "Bengali speech corpus for continuous automatic speech recognition system," in Speech Database and Assessments (Oriental COCOSDA), 2011 International Conference on. IEEE, 2011, pp. 51–55.
[14] S. Sultana, M. Akhand, P. K. Das, and M. H. Rahman, "Bangla speech-to-text conversion using SAPI," in Computer and Communication Engineering (ICCCE), 2012 International Conference on. IEEE, 2012, pp. 385–390.
[15] M. Hasnat, J. Molwa, and M. Khan, "Isolated and continuous Bangla speech recognition: Implementation, performance and application perspective," 2007.
[16] M. Hossain, M. Rahman, U. K. Prodhan, M. Khan et al., "Implementation of back-propagation neural network for isolated Bangla speech recognition," arXiv preprint arXiv:1308.3785, 2013.
[17] M. M. H. Nahid, M. A. Islam, and M. S. Islam, "A noble approach for recognizing Bangla real number automatically using CMU Sphinx4," in Informatics, Electronics and Vision (ICIEV), 2016 5th International Conference on. IEEE, 2016, pp. 844–849.
[18] P. Upadhyaya, S. K. Mittal, O. Farooq, Y. V. Varshney, and M. R. Abidi, "Continuous Hindi speech recognition using Kaldi ASR based on deep neural network," in Machine Intelligence and Signal Analysis. Springer, 2019, pp. 303–311.
[19] B. Popović, S. Ostrogonac, E. Pakoci, N. Jakovljević, and V. Delić, "Deep neural network based continuous speech recognition for Serbian using the Kaldi toolkit," in International Conference on Speech and Computer. Springer, 2015, pp. 186–192.
[20] N. Dave, "Feature extraction methods LPC, PLP and MFCC in speech recognition," International Journal for Advance Research in Engineering and Technology, vol. 1, no. 6, pp. 1–4, 2013.
[21] S. P. Rath, D. Povey, K. Veselý, and J. Černocký, "Improved feature processing for deep neural networks," in Interspeech, 2013, pp. 109–113.
[22] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.
[23] D. Povey, L. Burget, M. Agarwal, P. Akyazi, F. Kai, A. Ghoshal, O. Glembek, N. Goel, M. Karafiát, A. Rastrow et al., "The subspace Gaussian mixture model: A structured model for speech recognition," Computer Speech & Language, vol. 25, no. 2, pp. 404–439, 2011.
A Sequence-to-Sequence Pronunciation Model for Bangla Speech Synthesis

International Conference on Bangla Speech and Language Processing (ICBSLP), 21-22 September, 2018. 978-1-5386-8207-4/18/$31.00 ©2018 IEEE

Arif Ahmad, Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh, arif_ahmad-cse@sust.edu
Muhammed Zafar Iqbal, Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh, mzi@sust.edu
Mohammed Raihan Hussain, Department of Computer Science and Engineering, Leading University, Sylhet, Bangladesh, rhzinuk@gmail.com
Mohammad Shahidur Rahman, Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh, rahmanms@sust.edu
Mohammad Reza Selim, Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet, Bangladesh, selim@gmail.com

Abstract— Extracting pronunciation from written text is necessary in many application areas, especially in text-to-speech synthesis. Bangla is not completely a phonetic language, meaning there is not always a direct mapping from orthography to pronunciation. It mainly suffers from the 'schwa deletion' problem, along with some other ambiguous letters and conjuncts. Rule-based approaches cannot completely solve this problem. In this paper, we propose to adopt an Encoder-Decoder based neural machine translation (NMT) model for determining the pronunciations of Bangla words. We mapped the pronunciation problem onto a sequence-to-sequence problem and used two Gated Recurrent Unit Recurrent Neural Networks (GRU-RNNs) for our model. We fed the model with two types of input data: in one model we used 'raw' words and in the other 'pre-processed' words (normalized by hand-written rules). Both experiments showed promising results and can be used in any practical application.
Keywords— encoder-decoder, sequence-to-sequence, gru-rnn, pronunciation model

I. INTRODUCTION

In any language, determining pronunciation from written text is not always a trivial task. If a language has a direct mapping between the orthographic representation and pronunciation, it is called a phonetic language. Bangla, a modern Indo-Aryan language, is not completely a phonetic language. Although most Bangla written text can be pronounced directly without ambiguity, a problem arises with a phenomenon called 'schwa deletion'. 'Schwa' is an implicit vowel phoneme ('অ' /ɔ/ for Bangla) associated with every consonant of a language. In Bangla, the schwa phoneme is sometimes kept as is, sometimes deleted, and sometimes replaced with the 'ও' /o/ vowel. Besides, some letters and conjuncts have ambiguous pronunciations too.

A phonemic representation is the pronounceable form of written text. Correct phonemic representation is required in many applications, such as text-to-speech synthesis and speech recognition. The oldest approaches to developing a pronunciation model were usually rule-based: hand-written rules were created with the help of phonetic experts to derive the pronunciations of words. The second approach is the data-driven approach, where rules are learned automatically from data; pronunciation lexicons are sometimes used along with the rules. The latest approach to designing pronunciation models is the statistical approach. This approach is also data-driven, but these techniques not only learn from data, they use statistical techniques to do so.

Work on solving Bangla pronunciation problems is very rare, so we have looked at the literature of other similar languages, such as Hindi. Reference [1] used a rule-based approach with additional morphological analysis, focusing particularly on the schwa-deletion problem. An overview of different pronunciation models was presented in [2], where the author concluded that more research is required in this area. More advanced, statistical experiments have been used in recent years, such as [3] and [4]. S. A. Chowdhury et al. [5] proposed a machine learning based pronunciation model using the Conditional Random Fields (CRF) algorithm and obtained about 85% overall accuracy.

It is obvious from the above-mentioned works that the older rule-based approaches cannot solve the pronunciation problem completely. Fortunately, the huge improvements in computing power in recent years give us the opportunity to apply machine learning based methods to problems that could not be solved by rules. Various deep learning models enable us to utilize the immense amount of data and processing power at our disposal. In this paper, we propose a sequence-to-sequence based deep learning model to solve the Bangla pronunciation problem. Although some machine learning techniques have been used in recent years [4], [5], our work is novel in two aspects. First, our model generates the phonemic form of a given word, whereas others generate a list of phonemes. We preferred phonemic forms over phoneme lists because they can be parsed in various ways, such as phones, di-phones, tri-phones, syllables, etc. Second, we have used a sufficiently large amount of data (about 80,000 words in total) to train our deep learning model; a data set of this magnitude was never used in previous works. We gathered Bangla lexicons from different sources (described in Section III) and manually removed the inconsistencies that were present in the data. We plan to release our lexicon publicly, so that people can use it to evaluate any related work.
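The inconsistency removal mentioned above can be partly automated by flagging words that map to more than one pronunciation. The sketch below assumes a hypothetical tab-separated "word<TAB>pronunciation" layout; the authors' actual file formats may differ:

```python
# Sketch: flag inconsistent lexicon entries for manual review.
# The "word<TAB>pronunciation" format and the sample data are assumptions,
# not the paper's actual lexicon layout.
from collections import defaultdict

def find_inconsistencies(lines):
    """Group pronunciations by word; a word mapped to more than one
    distinct pronunciation is a candidate for manual review."""
    prons = defaultdict(set)
    for line in lines:
        word, pron = line.rstrip("\n").split("\t")
        prons[word].add(pron)
    return {w: sorted(p) for w, p in prons.items() if len(p) > 1}

entries = ["cat\tk a t", "dog\td o g", "cat\tk a a t"]  # placeholder data
print(find_inconsistencies(entries))  # {'cat': ['k a a t', 'k a t']}
```

Such a report only surfaces candidates; whether a duplicate is a genuine error or a legitimate pronunciation variant still requires a human decision, which is why the curation described above was done manually.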
We have implemented an Encoder-Decoder based [6] Neural Machine Translation (NMT) model to convert Bangla words into their pronounceable forms. We adopted this NMT model by mapping our problem onto a sequence-to-sequence deep learning problem. We describe our work in the subsequent sections. Section II defines the problem elaborately and discusses the necessity of developing a sequence-to-sequence model for the pronunciation task. Section III discusses the process of data preparation. In Section IV, we describe the architecture of our model in detail. Section V illustrates the experimental process and results. Finally, Section VI concludes the discussion by summarizing the experiments and pointing out future research directions.

II. REVIEW OF BANGLA PRONUNCIATION PROBLEMS

As mentioned in Section I, Bangla is not completely a phonetic language. It struggles mainly in pronouncing the schwa phoneme, i.e. the implicit 'অ' /ɔ/ sound associated with each consonant. Some examples of schwa deletion are: 'বলব' /b o l b o/ ('অ' /ɔ/ deleted from the middle letter of the word), 'নগর' /n ɔ g o r/ ('অ' /ɔ/ deleted from the last letter of the word), and 'শহর' /ʃ ɔ h o r/ ('অ' /ɔ/ replaced with 'ও' /o/ in the second letter). A rule-based approach cannot solve the issue, because there is no grammatical rule for pronunciation in Bangla.

Another ambiguous situation arises with the pronunciation of 'স'. It has two pronunciations: /ʃ/ and /s/. It is observed that most 'proper' Bangla words pronounce 'স' as /ʃ/; on the other hand, most foreign words adapted into Bangla pronounce 'স' as /s/. But exceptions exist in both cases. The vowel phoneme 'এ' /e/ is sometimes replaced with the /æ/ phoneme; an example is দেখা /d æ kʰ a/ ('এ' /e/ becomes /æ/). Besides these, some conjuncts also have ambiguous pronunciations.

III. DATA PREPARATION

A sufficiently large lexicon was needed to train the pronunciation model. We obtained our lexicon from several sources. Google released a public Bangla lexicon [7] of ~60,000 words, which they used to develop their Bangla speech synthesizer [8]; they annotated the pronunciations in ToBI notation. We adopted Google's lexicon but changed the pronunciations into Bangla orthographic form. We collected another lexicon of ~40,000 words from the Bangla Academy Dictionary. Upon gathering all the lexicons, we compiled a lexicon of ~80,000 words. We split the data in an 80-10-10 ratio (80% for training, 10% for validation, 10% for testing).

We have also compiled a lexicon of the 'normalized' forms of those words. By 'normalized' we mean the removal or replacement of unnecessary letters in the words. For example, the characters 'শ' and 'ষ' have the same pronunciation /ʃ/, so we replaced every 'ষ' with 'শ'. The replacement rules are discussed in [9], [10] and [11]. The replacements we used are listed in Table I.

TABLE I. REPLACE RULES FOR NORMALIZATION (letter → replacement: example)
ঈ → ই: ঈগল=ইগল
ঈ-কার → ই-কার: রাণী=রাণি
ঊ → উ: ঊহ=উহ
ঊ-কার → উ-কার: শূন্য=শুন্য
ঋ → রি: ঋতু=রিতু
ঐ → ওই: ঐক্য=ওইক্য
ঐ-কার → ও-কার + ই: কৈ=কোই
ঔ → ওউ: ঔষধ=ওউশধ
ঔ-কার → ও-কার + উ: কৌশল=কোউশল
ৎ → ত্: মহৎ=মহত্
ঙ → ং: গাঙ=গাং
ণ → ন: হরিণ=হরিন
য → জ: যখন=জখন
ষ → শ: মহিষ=মহিশ

IV. MODEL ARCHITECTURE

A. Building the Encoder-Decoder Model

Our sequence-to-sequence model uses the Encoder-Decoder architecture. We have adopted a neural machine translation model [12] to perform the pronunciation conversion. Fig. 1 shows a simplified flow diagram of our model. We conducted our experiment on two types of input: 'raw' words and 'normalized' words; we compare the outcomes of the two experiments in the next section.

The model is split into two parts: an encoder, which maps the source text to a "thought vector" that summarizes the text's contents, and a decoder, which takes the thought vector as input and decodes it into the destination text. The neural network cannot work directly on text, so we first split the input word into a sequence of letters and convert each letter to an integer token using a tokenizer. The network cannot work on integers either, so we use a so-called embedding layer to convert each integer token into a vector of floating-point values. The embedding is trained alongside the rest of the neural network to map letters with similar semantic meaning to similar floating-point vectors. The full architecture of our model is illustrated in Fig. 2.

B. Work-flow of the Model

Consider the Bangla word "মানুষ" /m a n u ʃ/, which should be converted to its phonemic form "মানুশ". We first convert the entire vocabulary of our data set to integer tokens, so the text "মা নু ষ" becomes [12, 54, 197]. Each of these integer tokens is then mapped to an embedding vector with e.g. 128 elements; the integer token 12 could, for example, become [0.12, -0.56, ..., 1.19], the integer token 54 could become [0.39, 0.09, ..., -0.12], and so on. These embedding vectors are then input to the recurrent neural network, which has 3 GRU layers. The last GRU layer outputs a single vector, the thought vector, which summarizes the contents of the source word and is used as the initial state of the GRU units in the decoder part.

The destination text "মা নু শ" is padded with the special markers "ssss" and "eeee" to indicate its beginning and end, so the sequence of integer tokens becomes [2, 12, 54, 79, 3]. During training, the decoder is given this entire sequence as input, and the desired output sequence is [12, 54, 79, 3], which is the same sequence time-shifted one step. We are trying to teach the decoder to map the thought vector and the start token "ssss" (integer 2) to the next letter "মা" (integer 12), then map "মা" to "নু" (integer 54), and so forth.

C. Hyperparameter Tuning

We started our experiment with initial hyperparameter settings. After adjusting the parameters over a few iterations of the model, and studying the observations of [5], we tuned the parameters as follows. The embedding layer is a 128-dimensional vector, and both the encoder and the decoder have three GRU layers.
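The tokenization and one-step time shift from the work-flow above can be sketched in plain Python. The token IDs, the "ssss"/"eeee" markers, and the three-letter example follow the text; the vocabulary mapping itself is an illustrative stand-in:

```python
# Sketch of the seq2seq data preparation described above: letters become
# integer tokens, and the decoder target is the decoder input shifted by
# one step (teacher forcing). Token IDs are illustrative stand-ins.
START, END = "ssss", "eeee"
token_of = {START: 2, END: 3, "মা": 12, "নু": 54, "ষ": 197, "শ": 79}

def encode(letters):
    return [token_of[l] for l in letters]

src = encode(["মা", "নু", "ষ"])                 # encoder input
dec_in = encode([START, "মা", "নু", "শ", END])  # decoder input
dec_target = dec_in[1:]                         # time-shifted by one step

print(src)         # [12, 54, 197]
print(dec_in)      # [2, 12, 54, 79, 3]
print(dec_target)  # [12, 54, 79, 3]
```

The one-step shift is what lets the decoder learn, at each position, to predict the next letter given the thought vector and everything emitted so far.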
We used GRU cells over LSTM cells because a GRU does not need a memory unit, is simpler, easier to modify, and trains faster. GRUs also perform better on relatively small and medium-sized data sets when doing language modeling. We ran 20 epochs on every iteration of model training; increasing the epochs beyond this did not perform any better. The hyperparameters used in our model are summarized in Table II.

Fig. 1: Flow diagram of the encoder-decoder model
Fig. 2: Architecture of sequence-to-sequence model

TABLE II. HYPERPARAMETERS USED IN THE MODEL
Embedding Dimension: 128
State Size: 512
Batch Size: 500
Number of Epochs: 20
RNN Cell Variant: GRU
Encoder Depth: 3
Decoder Depth: 3
Encoder: Bidirectional
Decoder: Bidirectional

V. RESULTS & PERFORMANCE ANALYSIS

Our model differs from other pronunciation models in terms of output. Usually a pronunciation model takes a word as input and generates phonemes as output, but our model generates the phonemic representation of the input word instead of phonemes. We chose this because the phonemic representation has a wide range of application areas; for example, in a text-to-speech synthesis system, it can be used to parse the text into various units, e.g. phones, di-phones, syllables, etc. Some examples from our pronunciation model are given in Table III.

TABLE III. OUTPUT OF THE PRONUNCIATION MODEL (input word | normalized word | output word)
মিহষ | মিহশ | মািহ
যাযাবর | জাজাবর | জাজাব
কৃিষঋণ | কৃিশিরন | কৃিশির
ঐকমত | ওইকমত | ওই্ েকাম েতা
সদাচরণ | সদাচরন | শদাচেরা

As mentioned earlier, we conducted our experiment on two types of input words. First, we trained the model with raw words and obtained an accuracy of 87.34% on the test set. Then we normalized the input words with hand-written rules and trained the model with these normalized words, obtaining an accuracy of 89.57%. When training with normalized words, the training-set accuracy increased significantly due to overfitting. This problem could be solved by increasing the size of the data set; since we could not get more data, we used regularization to handle the situation. We also analysed the erroneous outputs manually and found some patterns where errors occur frequently, which can be addressed by hyperparameter tuning. Table IV summarizes the performance of our experiments.

TABLE IV. PERFORMANCE OF PRONUNCIATION MODEL (type of input | training set accuracy | test set accuracy)
Raw Words | 90.82% | 87.34%
Normalized Words | 96.37% | 89.57%

VI. CONCLUSION

In this paper, we implemented a deep neural network based pronunciation model for the Bangla language, successfully adopting an NMT model for our problem. Although the accuracy of the model is not perfect yet, it can be used in real applications. For example, in text-to-speech (TTS) systems, users actually hear continuous speech and are not bothered by the erroneous pronunciation of just a few words.
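The test-set accuracies reported in Table IV are consistent with a whole-word exact-match criterion; the paper does not spell out its scoring procedure, so that criterion, like the word pairs below, is an assumption for illustration:

```python
# Sketch of exact-match accuracy: a prediction counts as correct only if
# the entire predicted phonemic form equals the reference form.
# The prediction/reference pairs below are placeholders, not real outputs.
def exact_match_accuracy(predictions, references):
    assert len(predictions) == len(references)
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

preds = ["মানুশ", "জাজাবর", "শহর"]
refs = ["মানুশ", "জাজাবর", "শওর"]
print(exact_match_accuracy(preds, refs))  # 2 of 3 correct
```

Exact match is a strict metric for sequence output: a single wrong character fails the whole word, which is why per-word accuracies below 90% can still sound acceptable in continuous TTS output.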
We tested our model in an existing TTS system, 'Subachan', and our listeners did not complain about the pronunciation. Having mentioned that, there remains some scope to improve the performance of the model. Our error analysis indicates that further preprocessing and hyperparameter tuning can result in slightly better performance. Also, adding an 'attention' layer to our model would significantly improve its accuracy.

REFERENCES

[1] B. Narasimhan, R. Sproat, and G. Kiraz, "Schwa-deletion in Hindi text-to-speech synthesis," International Journal of Speech Technology, vol. 7, p. 319, 2004.
[2] T. Svendsen, "Pronunciation modeling for speech technology," in 2004 Intl. Conference on Signal Processing and Communications (SPCOM '04), Bangalore, India, 2004, pp. 11–16.
[3] P. Taylor, "Hidden Markov models for grapheme to phoneme conversion," in INTERSPEECH-2005, pp. 1973–1976.
[4] K. Prahallad, A. W. Black, and R. Mosur, "Sub-phonetic modeling for capturing pronunciation variations for conversational speech synthesis," in 2006 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, Toulouse, 2006.
[5] S. A. Chowdhury, F. Alam, N. Khan, and S. R. H. Noori, "Bangla grapheme to phoneme conversion using conditional random fields," in 2017 20th International Conference of Computer and Information Technology (ICCIT), Dhaka, 2017, pp. 1–6.
[6] D. Britz, A. Goldie, M.-T. Luong, and Q. Le, "Massive exploration of neural machine translation architectures," arXiv preprint arXiv:1703.03906, 2017.
[7] Online: https://github.com/googlei18n/language-resources/blob/master/bn/data/lexicon.tsv, accessed on August 12, 2018.
[8] A. Gutkin, L. Ha, M. Jansche, K. Pipatsrisawat, and R. Sproat, "TTS for low resource languages: A Bangla synthesizer," in LREC, 2016.
[9] F. Alam, S. M. Murtoza Habib, and M. Khan, "Text Normalization System for Bangla," 2009.
[10] M. Rashid, M. Hussain, and M. Rahman, "Text Normalization and Diphone Preparation for Bangla Speech Synthesis," Journal of Multimedia, vol. 5, no. 6, 2010.
[11] A. Naser, D. Aich, and M. R. Amin, "Implementation of Subachan: Bengali text to speech synthesis software," in International Conference on Electrical & Computer Engineering (ICECE 2010), Dhaka, 2010, pp. 574–577.
[12] Online: https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/21_Machine_Translation.ipynb, accessed on August 12, 2018.
<FEFF005500740069006c0069006300650020006500730074006100200063006f006e0066006900670075007200610063006900f3006e0020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f0073002000640065002000410064006f00620065002000500044004600200061006400650063007500610064006f007300200070006100720061002000760069007300750061006c0069007a00610063006900f3006e0020006500200069006d0070007200650073006900f3006e00200064006500200063006f006e006600690061006e007a006100200064006500200064006f00630075006d0065006e0074006f007300200063006f006d00650072006300690061006c00650073002e002000530065002000700075006500640065006e00200061006200720069007200200064006f00630075006d0065006e0074006f00730020005000440046002000630072006500610064006f007300200063006f006e0020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e> /FRA <FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /ITA (Utilizzare queste impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti 
aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) /JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /PTB <FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /ENU (Use these settings to create PDFs that match the "Required" settings for PDF Specification 4.01)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
|
D S Sharma, R Sangal and A K Singh (eds.), Proc. of the 13th Intl. Conference on Natural Language Processing, pages 65-70, Varanasi, India, December 2016. ©2016 NLP Association of India (NLPAI)

Syntax and Pragmatics of Conversation: A Case of Bangla

Samir Karmakar, School of Languages and Linguistics, Jadavpur University, India. samirkrmkr@yahoo.co.in
Soumya Sankar Ghosh, School of Languages and Linguistics, Jadavpur University, India. ghosh.soumya73@yahoo.com

Abstract: Conversation is often considered the most problematic area in the field of formal linguistics, primarily because of its dynamic, emerging nature. Its degree of complexity is also high in comparison to traditional sentential analysis. The challenge of developing a formal account of conversational analysis is bipartite: since the smallest structural unit at the level of conversational analysis is the utterance, the existing theoretical framework has to be developed in such a manner that it can take account of the utterance. In addition to this, a system should be developed to explain the interconnections of the utterances in a conversation. This paper tries to address these two tasks within the transformational and generative framework of Minimalism, proposed by Chomsky, with an emphasis on the Bengali particle to, traditionally classified as an indeclinable.

1 Introduction

Formal modeling of conversation is still considered a daunting task in the fields of both computational and cognitive linguistics. In spite of the emphasis by Chomsky (1986) on the questions of (a) what constitutes the knowledge of language, (b) how knowledge of language is acquired, and (c) how knowledge of language is put to use, few approaches have really dealt with the last question of the series.
Though formal theories have been proposed to deal with the very nature of knowledge of language in linguistics, very little has been done to understand how this knowledge is put to use within the general framework of Transformational and Generative Grammar (henceforth, T-G Grammar). Under this situation, the paper seeks to investigate how knowledge of language is put to use. More specifically, the paper intends to explore how efficiently the semantics and pragmatics of conversation can be explained within the existing theoretical framework of T-G Grammar. Consider the following example:

1. Speaker_1: suśīl-ɸ ās-ɸ-b-e to
   Sushil-Nom come-ɸ-fut-3.fut prt
   "Will Sushil come?"
   Speaker_2: ā suśīl-ɸ ās-ɸ-b-e
   yes Sushil-Nom come-ɸ-fut-3.fut
   "Yes, Sushil will come."

In this piece of communication, Speaker_1 asks a question about the arrival of Sushil. In response to Speaker_1's query, Speaker_2 confirms Sushil's arrival. The current state of linguistic enquiry in the fields of syntax and semantics does not deal with this type of connected speech, which we encounter often in our daily life. In most cases, an idealized sentential representation is discussed to unveil the grammatical intricacies. Interestingly, what falls outside the scope of these sorts of investigations is a systematic exploration into what we would call the grammar of conversation. The importance of studying the grammar of conversation also lies in the fact that conversation embodies many principles of complex dynamic systems. Under this situation, this paper attempts
to address the problems involved in the formal modeling of conversational discourse within the framework of the Minimalist Program (Chomsky, 1995), with a specific emphasis on the behavior of the Bengali particle to, traditionally classified as an indeclinable. Unlike the major lexical expressions, to as a discourse particle hardly contributes to the content of the sentence; rather, it is used to induce some effect of emotional coloring on the content itself. By emotional coloring we mean the various states of mind involved in the acts of questioning, doubting, confirming, requesting, etc. From the perspective of conversation analysis, expressions like to are extremely crucial, primarily because of their role in the ongoing epistemic negotiations happening between the interlocutors, i.e. the negotiation holding between Speaker_1 and Speaker_2. In virtue of contributing to the epistemic negotiation in terms of the various emotional colors mentioned above, it expects other sentential discourses. As a consequence, it becomes quite essential to investigate how this capacity of meaning making can be talked about in terms of the pragmatic, semantic and syntactic behaviors of to.

To attain the above stated goal, the paper will explore the sentential-level semantics and pragmatics of to in Bengali in Section 2. In Section 3, this discussion will be further augmented with some pragmatic observations regarding the linguistic behavior of to, to elucidate how the current understanding of pragmatics can provide some important clues about the formalization of the problem stated above. Finally, in Section 4, we propose a theoretical framework which is crucial in providing a systemic formal account of conversation.

2 Indeclinable to in Bengali

Traditionally, to is classified as an indeclinable because it is not affected by inflections. It is not expected by the major lexical categories of a sentence.
Its significance lies in its capacity to change the overall sense of a sentence. In addition to this, it is also noticed that the incorporation of to has a direct bearing on the pitch contour of the sentence itself. Compare the sentences cited in (2) and (3):

2. suśīl-ɸ kāl-ɸ bājār-e giy-ech-il-o
   Sushil-Nom yesterday-Loctemp market-Locspatial go-perf-past-3.past
   "Sushil had gone to the market yesterday."

3. suśīl-ɸ to kāl-ɸ bājār-e giy-ech-il-o
   Sushil-Nom prt yesterday-Loctemp market-Locspatial go-perf-past-3.past
   "Sushil had gone to the market yesterday."

As per traditional practice, articulations of declarative sentences seem to be objective renditions of real-world phenomena. For the interpretation of a declarative sentence like (2), one has no need to invoke knowledge of the preceding and following sentences, as if (2) were self-sufficient. In contrast, a sentence like the one cited in (3) is considered unreal in virtue of being stated in a mood other than the declarative. What distinguishes (3) as unreal is the presence of to in it. Incorporation of to in (2) results in an articulation stated in the irrealis mood. The irrealis mood covers a wide range of emotional involvements such as questioning, affirming, etc. In other words, (3) is not an objective rendition of any worldly
phenomena but involves a wide range of subjective necessities to satiate its meaning-construing capacity. At least in the case of to, it is also possible to show that a change of its position in (3) confirms different types of requirements raised by the context of communication within which the sentence is embedded. Consider the following sentences:

4. suśīl-ɸ kāl-ɸ to bājār-e giy-ech-il-o
   Sushil-Nom yesterday-Loctemp prt market-Locspatial go-perf-past-3.past
   "Sushil had gone to the market yesterday."

5. suśīl-ɸ kāl-ɸ bājār-e to giy-ech-il-o
   Sushil-Nom yesterday-Loctemp market-Locspatial prt go-perf-past-3.past
   "Sushil had gone to the market yesterday."

6. suśīl-ɸ kāl-ɸ bājār-e giy-ech-il-o to
   Sushil-Nom yesterday-Loctemp market-Locspatial go-perf-past-3.past prt
   "Sushil had gone to the market yesterday."

The other point which needs to be noted is the capacity of to to put emphasis on different constituents of a sentence. (To represent emphasis, bold letters are used.) A change in position changes the pattern of emphasis while keeping the emotional content intact. A change in emotional content can only be initiated by a change in the pitch contour: from Fig. 1, it is visible that in the case of affirming, stress is put on the syllables quite differently than in the case of questioning. Moreover, the point we want to make here is that emotional conditioning has the power to supersede lexical conditioning while emphasizing the communicative intention.

Fig. 1. suśīl kāl bājāre giyechilo to

It is not hard to show that ambiguity in the emotional content is always relative to the pitch contour carried by an utterance. For example, the ambiguity between affirming and questioning/doubting in the case of (7) can be resolved by taking account of the associated pitch contour.

7. suśīl-ɸ ās-ɸ-b-e to
   Sushil-Nom come-ɸ-fut-3.fut prt
   "Will Sushil come?"
However, when the sense of request prevails, no such ambiguity in terms of emotional content is noticed:

8. ekbār es-ɸ-ɸ-o to
   once come-ɸ-pres-2.pres prt
   "Come once."

Besides this, to can also appear with tāi and hay. The resultant forms, tāito and hayto, can mean several things depending on the context:

9. tāi to bal-ch-ɸ-i
   because prt tell-cont-pres-1pres
   "That is why I am telling (this)."

10. ha-ɸ-ɸ-y to tā-i bal-ech-ɸ-i
    be-ɸ-ɸ-3pres.Imp prt that-emph tell-perf-pres-1pres
    "Probably, I have said so."

Though tāito and hayto are composed of two different morphemes, they are often treated as single forms. Because of the anaphoric nature of tāi, tāito establishes a relation between the current articulation and the previous articulations. In a conditional construction like (11), the inclusion of to as in (12) brings a different shade of interpretation, equivalent to ādau jadi balte dāo "if at all you allow me to speak".

11. bal-te di-le bal-ɸ-ɸ-i
    tell-prt give-prt tell-ɸ-pres-1.pres
    "If you allow, then only I speak."

12. bal-te di-le to bal-ɸ-ɸ-i
    tell-prt give-prt prt tell-ɸ-pres-1.pres
    "If you at all allow me to speak."

to can also be used in a negative sense:

13. bal-te di-le to
    tell-prt give-prt prt
    "We are not allowed to speak."

When to is used in conjunction with
the future tense, it results in a sense of doubt and/or questioning. Consider (14):

14. bal-te de-ɸ-b-e to
    tell-prt give-ɸ-fut-3.fut prt
    "Will they allow us to speak?"

On the basis of this discussion, what we can argue is that to is primarily an expression not containing anything propositional in nature. As a consequence, its meaning-construing capacity cannot be discussed in terms of truth conditions. Under this situation, what is of interest is the way we understand the meaning-construing capacity of to: to as an emphatic indeclinable has the power to change the meaning of the propositional content of the sentence within which it is embedded. The appearance of to in a sentence has a distinct phonological bearing which is directly connected with the emotional-coloring effect. Therefore, a theoretical account of the meaning-construing capacity of to should have some component to deal with the phonological aspect involved with it.

3 Bengali particle to in the light of Pragmatics

On the basis of the discussion of Section 2, at least two different aspects of to can be talked about. Firstly, during conversation, the indeclinable to plays a crucial role in imposing illocutionary force on the propositional content of the articulation. As a consequence, the syntax and semantics of to are not interpreted within the scope of IP (= Inflectional Phrase); rather, we feel that IP is dominated by discourse particles like to. A similar observation is made by Searle (1969) while explaining the interrelation holding between illocutionary force (= F) and propositional content (= p). To represent the interaction, Searle proposes the following scheme: F(p). Vanderveken (1990) has also supported this proposal. Secondly, a point should be noted here regarding the linguistic behavior of discourse particles like to: the meaning-construing behavior of the discourse particle to is not restricted to the scope of the utterance in which it is embedded.
Its meaning-construing behavior often invokes the context of other utterances. This has already been noticed in the discussions of Sections 1 and 2. Therefore, to exhaust its meaning-construing capacity, an analytical framework should have some provisions. Under this situation, then, what we want to look for in this paper is a unified theoretical account which can take care of the aforementioned bilayered meaning construction: in one layer, to as an emphatic particle determines the illocutionary aspect of the utterance; and, in the other layer, it motivates a move to satisfy the requirements posed by the perlocutionary act of the following utterance. Note that the concepts of locution, illocution and perlocution were first proposed by Austin (1975).

4 Discussion

While dealing with the problem of to, Bayer et al. (2014) consider Rizzi's model, proposed in 1997, where the syntactic representation of force is proposed as the highest functional projection: Rizzi argues that CP (= Complementizer Phrase) is composed of ForceP (= Force Phrase), FocP (= Focus Phrase) and TopP (= Topic Phrase), just like the way IP contains information
about TP (= Tense Phrase) and AgrP (= Agreement Phrase). Rizzi's proposal in this regard can be summarized in the following figure:

Fig. 2. Rizzi's proposal: pragmatization of syntax

Rizzi's proposal provides a solution to the incorporation of pragmatic content in the existing framework of syntax. In other words, syntax is now capable of taking account of the utterance.

4.1 Incorporating Illocution

To incorporate the illocutionary aspect of an utterance, the existing theoretical framework has to undergo certain modifications. These modifications will be elaborated in this section. Consider the following examples:

15. suśīl-ɸ to ās-ɸ-b-e
    Sushil-Nom prt come-ɸ-fut-3fut
    "Sushil will come." [Confirming: keu nā eleo, suśīl to āsbe "even if nobody comes, (I do believe) Sushil will come"]

16. suśīl-ɸ ās-ɸ-b-e to
    Sushil-Nom come-ɸ-fut-3fut prt
    "Will Sushil come?"

Following Rizzi's proposal, for (15) we get the syntactic representation of Fig. 3. As per this representation, to originates at the Head-FocP position. As an emphatic particle, to contains the [+emph] feature, which belongs to the [+F] class. The DP moves from Spec-AgrSP (= specifier position of the Subject Agreement Phrase) to Spec-FocP (= specifier position of the Focus Phrase) in order to get the focus feature checked:

Fig. 3. Syntactic representation of (15)

In other words, the [+emph] feature belonging to the [+F] feature class is attributed to the phrase that has migrated from the Spec-AgrSP position to Spec-FocP. The syntactic representation for (16) can also be provided following the same line of reasoning:

Fig. 4. Syntactic representation of (16)

As per these representations, to originates in the Head-FocP (= head position of the Focus Phrase) position with the head feature +F. The solution to this specific problem can be generalized over a class of linguistic constructions involving the phenomenon of focusing. The generalization, then, provides the interpretation (Fig. 5) that Head-FocP attracts the emphasized XP towards itself in order to get the +F feature checked; this in turn remains the sole motivation for the movement of the emphasized XP to the Spec-FocP position.

Fig. 5. Motivation for the movement of the emphasized phrase to the Spec-FocP position

In other words, the proposal creates a motivation for the phrase marked with +F to move out of its original position to a higher node to satisfy the need of interpretation: what remains uninterpreted in its original position becomes completely interpretable due to its movement to the Spec-ForceP position. Thus far, the first layer of the bilayered representation discussed in Section 3 has been outlined. The rest of this article will deal with the second layer of the bilayered representation.

To address the problem of capturing the illocutionary aspect of an utterance, we adopt a way similar to the one discussed above, following the proposal developed in Karmakar et al. (2016). As per this proposal, the FocP moves to the Spec-ForceP position to check the head feature of the ForceP. Note that in (15) the head feature is [+R], and in (16) it is [+Dr].

Fig. 6. Capturing illocution

To propose an effective way to capture illocution, we would
like to accommodate the taxonomy of speech acts proposed by Searle (1976). As per this proposal, speech acts can be reduced to five main types, namely (a) representatives (= [+R]: asserting, concluding, etc.), (b) directives (= [+Dr]: requesting, questioning, etc.), (c) commissives (= [+C]: promising, threatening, offering, etc.), (d) expressives (= [+E]: thanking, apologizing, welcoming, congratulating, etc.), and (e) declarations (= [+Dl]: excommunicating, declaring, christening, etc.).

4.2 Conversation in terms of illocution and perlocution

Conversation differs from isolated utterances in several respects: in conversation, utterances often stand in some relation to other utterances in order to satisfy different degrees of expectancy. Conversation is not something static; rather, it is a dynamic network of different intentions. Following Austin, these intentions can best be talked about in terms of different acts, namely the locutionary act, the illocutionary act and the perlocutionary act. The locutionary act is primarily concerned with those facts which are central to making sense in language; the illocutionary act is performed by the speaker to express that intention which is not directly associated with the discrete lexicalized content of the articulation; and the perlocutionary act is all about what follows an utterance in a conversation.

Following Karmakar et al. (2016), we propose a further split of ForceP into a perlocutionary phrase (= PerlocP) and an illocutionary phrase (= IllocP) to capture the way different types of speech acts interact with each other during conversation. In our earlier discussion, we have shown how the illocutionary act can be handled within the syntactic framework of the Minimalist Program; and now we propose the following scheme of representation for (1) as an exemplar to show how the syntax of conversation can be modeled to take account of the emerging network of intentions during different turns:

Fig. 7.
A Minimalist representation of (1) in terms of perlocution and illocution

As per this representation, the IllocP dominating …[XP]+Dr… is connected with the IllocP dominating …[XP]+Dl… not under any influence of the illocutionary acts (marked with the subscripts +Dr and +Dl respectively) but definitely due to the act of perlocution expected by the utterance of Speaker_1. Also note that ā appears in the Head-IllocP position and moves to the Head-PerlocP position to satisfy the expectancy of the speech act performed by Speaker_1. This position is a bit different from what Karmakar et al. (2016) have claimed in their paper.

5 Conclusion

Since conversation is the most prevalent form of human communication, a formal study of conversation as an embodiment of a complex adaptive system may reveal the various intricacies involved in the process of conversing. We have attempted to explore one such intricacy: which principles and parameters are at work to make a communication meaningful. A little attention will reveal the fact that the approach we have argued for encompasses the questions of both "what constitutes the knowledge of language" and "how this knowledge is put to use". Future research along this line demands a more rigorous characterization of the various concepts which remain crucial in defining
their role in construing the structure of conversation in general.

References

Austin, J. (1975). How to Do Things with Words (2nd ed.). Oxford: Clarendon Press.
Bayer, J., Dasgupta, P., Mukhopadhyay, S., & Ghosh, R. (2014, February 6-8). Functional Structure and the Bangla Discourse Particle to. Retrieved July 19, 2015, from http://ling.uni-konstanz.de/pages/StructureUtterance/web/Publications_&_Talks_files/Bayer_Dasgupta_MukhopadhyayGhosh_SALA.pdf
Chomsky, N. (1986). Knowledge of Language. New York: Praeger.
Chomsky, N. (1995). The Minimalist Program (3rd ed.). Massachusetts: The MIT Press.
Karmakar, S., Ghosh, S., & Banerjee, A. (2016). A Syntactic Framework of Conversational Pragmatics of Bengali. Unpublished manuscript.
Rizzi, L. (1997). The fine structure of the left periphery. In L. Haegeman (Ed.), Elements of Grammar (pp. 281-337). Dordrecht: Kluwer.
Searle, J. (1969). Speech Acts. Cambridge: Cambridge University Press.
Searle, J. (1976). The classification of illocutionary acts. Language in Society, 5, 1-23.
Vanderveken, D. (1990). Meaning and Speech Acts (Vol. I). Cambridge: Cambridge University Press.
Evaluating Document Analysis with kNN Based Approaches in Judicial Offices of Bangladesh

Md. Aminul Islam, Department of Computer Science and Engineering, Military Institute of Science & Technology, Dhaka, Bangladesh. Email: sumon2907@gmail.com
Md. Jahidul Haque, Department of Mathematics & Physics, North South University, Dhaka, Bangladesh. Email: jahidul.haque@northsouth.edu

Abstract— In this contemporary era of artificial intelligence, machine learning (ML) algorithms are getting significant attention for the analysis of textual data. In recent years, operational improvements in different corporate sectors of Bangladesh have been achieved by digitizing the process flow instead of using manual paper trails in offices. Nowadays, the judicial sector has been included in the state-wide digitalization process by archiving judiciary records. Despite such improvement, automatic categorization of documents using textual analysis is not yet used to label the correct class of a judicial document; in fact, officers spend a lot of time manually labeling court-related documents. In the present investigation, we propose a textual analysis tool that can be a step toward solving the manual categorization problem within the judicial sector of Bangladesh. Our objective is to label a normalized text document with a suitable class in terms of the case type by applying an ML algorithm. In addition, grammatical analysis of English documents is integrated through natural language processing (NLP) techniques, as well as filtering of the feature sets by a TF-IDF based term-weighting scheme. The outcomes show the important impact of NLP techniques in generating useful training data for the kNN classification algorithm for the categorization of English documents in the Bangladeshi judiciary sector.

Keywords—kNN; judicial document categorization; natural language processing; TF-IDF; grammatical analysis; text classification; labeling

I.
INTRODUCTION In the judicial sector, performing daily judicial activities as a continuous service requires precise classification of suit-related documents. Maintenance of judicial records is always important, and automation of the process can relieve huge pressure at the corporate level of the judicial sector in Bangladesh. To perform this requisite task, text classification techniques can be crucial for improving the performance of operational activities, with essential backup storage through appropriate categorization. Identifying a document using ML classification alongside NLP can streamline such processes with accurate categorization of suit documents, while analyzing the significant segments of the text document with linguistic analysis is essential to extract the key textual features. As a consequence, clustering of the textual data will be more accurate through grammatical analysis and retrieval of the significant terms, with analysis of the sentence clause at different levels over a large number of text corpora [1]. Labeling words according to parts of speech is a feasible way of feature extraction that improves the performance of an ML algorithm for document categorization [2]. Text documents include words, numerals and special characters, which can be processed using different text processing methods of NLP, including parts of speech tagging, lemmatization, stop word removal, regularization, etc. There are various steps which are essential to ensure the accuracy of classification, including data extraction and processing to fit for the</s>
|
<s>classifier, as well as for identification of the best classification process [4]. A preprocessed textual data set increases the efficiency of the ML classifier, where data preprocessing includes feature extraction and feature selection. Important features can be extracted through the removal of special characters, repeated words and paragraph indentation spaces in the document, which are irrelevant terms for the classification algorithm. On the other hand, selection of features includes implementation of various NLP techniques such as parts of speech tagging of the terms, term frequency (TF), inverse document frequency (IDF) and weighting schemes for the most important terms within the document corpus. TF is helpful for determining the most frequent words in a text, while IDF detects the documents that are associated with a term [4]. In fact, using the combination of TF and IDF makes the feature selection process more stable than other processes such as the correlation coefficient process [5]. In this paper, a KNN classifier is used to categorize judicial text documents according to the types of suits. A TF-IDF based weighting scheme is used to prepare the feature set for KNN. Trstenjak et al. implemented a KNN algorithm for the categorization of text data with TF-IDF based document preprocessing [7]. In the feature selection, we analyzed the grammatical structure of the texts according to the English language to identify the meaningful terms of the document. Later, a TF-IDF based weighting scheme is applied to the most important words of the document, i.e., those with the highest linguistic impact. Proceedings of the Second International Conference on Computing Methodologies and Communication (ICCMC 2018); IEEE Conference Record # 42656; IEEE Xplore ISBN: 978-1-5386-3452-3; 978-1-5386-3452-3/18/$31.00 ©2018 IEEE. 646
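The TF and IDF combination described above can be written out in a few lines of plain Python. This is a minimal sketch of the standard TF-IDF product only; the paper's modified scheme additionally folds in a linguistic weight, which is omitted here, and the toy token lists are invented for illustration:

```python
import math

def tf(term, doc):
    """Term frequency: number of occurrences of `term` in the token list `doc`."""
    return doc.count(term)

def idf(term, docs):
    """Inverse document frequency: log(N / df), where df is the
    number of documents in `docs` that contain `term`."""
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / df)

def tf_idf(term, doc, docs):
    """Combined TF-IDF weight of `term` in `doc` relative to the corpus `docs`."""
    return tf(term, doc) * idf(term, docs)

# Hypothetical toy corpus of tokenized judicial-style documents
docs = [
    ["criminal", "case", "section", "criminal"],
    ["civil", "suit", "section"],
    ["criminal", "investigation"],
]
print(tf_idf("criminal", docs[0], docs))  # 2 * log(3/2)
```

A term such as "section", which appears in most documents, receives a low IDF and therefore a low combined weight, which is exactly why the TF-IDF product favors discriminative terms during feature selection.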
A class labeling scheme is also implemented for the feature terms in order to fit the data set to the supervised machine learning algorithm of KNN. II. RELATED WORKS In the field of text categorization, different ML approaches have been implemented for various domains over the past few decades. Data representation also plays a vital role in improving the efficiency of a machine learning classifier; for this purpose, analysis of the terms, as well as determination of term weights, are effective techniques for extracting features from a text document [3]. Linguistic analysis is a process for extracting data, and a token-centric part-of-speech tagging process for textual data in the biomedical domain was introduced in [17]. Hung and Chen implemented sentiment analysis in their research, where word disambiguation was solved using different NLP approaches [13]. Different ML classifiers such as K-means, the Naive Bayes classifier [12] and the support vector machine (SVM) have been introduced for text classification. A comparison between the performances of a Bayesian classifier and a decision tree algorithm for categorization of textual data sets was presented in [8]. Li and Jain [6] compared the performance of four different methods for classification, and the evaluation results reflected that sometimes a single classifier can perform better than a combination of two classifiers. Furthermore, other approaches like</s>
|
<s>artificial neural networks (NN) can be used in document classification for their high accuracy, despite their high computational cost [2]. A lexical KNN was used for the classification of medical data in [15], with better performance than traditional KNN. In weighted KNN, every feature is assigned a weight which reflects its effect and influence in the document [14]. III. FORMULATION OF THE PROBLEM Text documents in the judiciary sector are usually archived in computer-generated document formats. Officers categorize each document by observing its title and save the document in a common location so that they can easily access the files later. These computer-generated documents can be categorized using artificial intelligence methodologies. In this paper, we use NLP methods for grammatical analysis of the sentences within a document and then use a TF-IDF based weighting scheme for creating the feature set of the KNN algorithm. The implementation is developed using the Python programming language alongside Python's NLP library NLTK. A. Feature Extraction Feature extraction from text includes removing special characters and stop words in a document. The feature extractor outputs feature vectors containing the full form of a sentence or clause. We used a Python program to collect each sentence in a paragraph or table based on the indentation. A feature extractor program is then used to extract all the words from the document, including all special characters, after which it filters all the special characters from the token list. The new token list contains the words. The program then calculates the part of speech of each token. Parts of speech tagging for each token is used in the feature selection method, where every token is analyzed in terms of its position and impact in the sentence using the feature selection module presented in Fig. 1, and new terms are created for the ML dataset. 
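The extraction-and-filtering step described above can be sketched as follows. This is a simplified illustration under assumptions: the sample sentence is invented, and the actual pipeline would then pass the resulting tokens to NLTK's POS tagger (`nltk.pos_tag`), which is not reproduced here:

```python
import re

def extract_tokens(text):
    """Split on whitespace, then strip special characters from each token."""
    raw = text.split()
    cleaned = [re.sub(r"[^A-Za-z0-9]", "", tok) for tok in raw]
    # Drop tokens that consisted only of special characters
    return [tok for tok in cleaned if tok]

tokens = extract_tokens("Information for the criminal case, under section 154!")
print(tokens)
# ['Information', 'for', 'the', 'criminal', 'case', 'under', 'section', '154']
```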
Figure 1: The implementation stages of the document classification (stages: Feature Extraction; Stop Word Removal / Paragraph Detection; Parts of Speech Tagging; Feature Selection; Noun/Verb/Adjective/Prepositional Phrase Detection; Modified TF-IDF Term Weighting; Sentence Structure Analysis; K-Nearest Neighbors; Data Set Creation; Training the Classifier; Predicting the Class of a Test Document; Assigning Classes to Each Term). B. Feature Selection The feature selection method is a dimension reduction method for reducing the unnecessary terms within the document to create a better data set for the ML classifier. Feature selection has two major parts in this implementation: a) Linguistic analysis: Linguistic analysis finds the proper meaning of the terms within a document. The system finds the words in each sentence and then creates terms from a linguistic point of view. Different NLP techniques such as part of speech (POS) tagging and chunking can be implemented to analyze the data [9], whereas in [10] an N-gram representation and the similarity of the longest common chunk were used to analyze common features between text datasets. The most important</s>
|
<s>features among the words in a sentence are the subject and the predicate. The subject mainly consists of the noun phrase, and the predicate contains the verb phrase, adjective phrase and prepositional phrase. In [11], noun phrase chunking with a filtering process was implemented to analyze text data; we likewise analyzed clauses from the sentences with a tokenization process for the terms. Subjects are the most important terms in the document vector and thus hold the highest preference among the phrases. Fig. 2 presents the output of the program that identifies the phrases, which are: 1. Noun Phrase (NounPhrase), 2. Verbal Phrase (VerbPhrase), 3. Adjective Phrase (AdjectivePhrase), 4. Prepositional Phrase (PPhrase). The preference of a phrase term is calculated using the preference order, where the longest phrase is given the highest value. The linguistic weight of a term t in a document k is computed from this preference order (Eq. 1). b) Modified TF-IDF weighting scheme: After setting the linguistic weight of the terms, a term weighting scheme is applied to the set of all terms using a modified TF-IDF approach in which the linguistic feature value is taken into consideration. For a term t in a document d of the document set D, the term frequency tf(t, d) is computed by summing the occurrences of t in d (Eq. 2). The weight of the term t in a document k (Eq. 3) also incorporates the relevance of the term in the whole document set D. 
The document relevance is computed over the whole document set (Eq. 4), where N is the number of total documents. The frequency of a term denotes its TF, and TF-IDF is a process used to weight each term according to how strongly the term is reflected in the text [7]. We used the product of TF (term frequency) and IDF (inverse document frequency) to weight each term and to select the important terms, easing the work of the classifier. The inverse document frequency of a term, as also used in [16], is idf(t) = log(N / df(t)), where df(t) is the number of documents containing t (Eq. 5). The combined weight of a term t using both TF and IDF is then w(t, d) = tf(t, d) × idf(t) (Eq. 6), which is scaled by the linguistic weight to give the final term weight (Eq. 7). c) Labeling the feature terms: The collected feature sets are then labeled with the normalized term weight. The labels of the collected set serve as the class vector of the ML algorithm. Some instances of feature terms in a document after processing are represented in Fig. 3. Feature name → feature class: 'the criminal case' → 'criminal', 'case', 'section'; 'submitted in police station' → 'Investigation', 'section', 'case'; 'section 154' → 'Information', 'section', 'case'; 'Date of submission' → 'date', 'informer', 'case'; 'Place of incident' → 'criminal', 'police', 'case'; 'Date of report sent' → 'informer', 'submitted', 'report'; 'Union-khajnogor' → 'information', 'district', 'case'. Figure 3: Instances of the Feature sets</s>
|
<s>in the document vector. C. ML Classification Machine learning algorithms are essential for the categorization of any feature vector. In the present analysis, we used the K-nearest neighbor algorithm to implement the classification of the English judiciary documents. For KNN, a feature vector is assigned to a class vector to classify a document D. For a given value of K, the algorithm finds the nearest neighbors having similar weight values. IV. RESULTS AND DISCUSSION The implementation of the document classification includes measuring the feature set weights and labeling the features with new classes for feeding the ML classifier. In this project, the NLTK library is used for the NLP techniques, and the scikit-learn library is used for implementing the classification model. In the data preprocessing stage, grammatical structure analysis is performed on the raw data of the document, as presented in Fig. 2. Then the words are grouped together into terms to fit the ML classifier. Some instances of the resulting terms are given in Fig. 3. 
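The grouping of POS-tagged tokens into phrase terms can be sketched with a small rule-based chunker. This is a simplified stand-in for the NLTK-based module (the tag sets and greedy rules are assumptions chosen to reproduce the Fig. 2 behavior), with the tagged sentence hard-coded in the form NLTK's `pos_tag` would return:

```python
NOUNISH = {"DT", "JJ", "NN", "NNS", "CD"}   # tags allowed inside a noun phrase
VERB_TAIL = {"IN", "NN", "NNS", "DT"}       # tags allowed after a verb in a verb phrase

def chunk(tagged):
    """Greedily group (word, tag) pairs into labeled phrases."""
    phrases, i = [], 0
    while i < len(tagged):
        tag = tagged[i][1]
        if tag.startswith("VB"):            # a verb opens a verb phrase
            j = i + 1
            while j < len(tagged) and tagged[j][1] in VERB_TAIL:
                j += 1
            phrases.append((tagged[i:j], "VerbPhrase"))
            i = j
        elif tag in NOUNISH:                # determiner/adjective/noun run
            j = i + 1
            while j < len(tagged) and tagged[j][1] in NOUNISH:
                j += 1
            phrases.append((tagged[i:j], "NounPhrase"))
            i = j
        else:                               # skip stand-alone prepositions etc.
            i += 1
    return phrases

sentence_a = [("Information", "NN"), ("for", "IN"), ("the", "DT"), ("criminal", "JJ"),
              ("case", "NN"), ("under", "IN"), ("section", "NN"), ("154", "CD"),
              ("submitted", "VBN"), ("in", "IN"), ("police", "NN"), ("station", "NN")]
for phrase, label in chunk(sentence_a):
    print(label, [word for word, _ in phrase])
```

On this sentence the chunker yields the same four phrases shown in Fig. 2: 'Information' and 'the criminal case' and 'section 154' as noun phrases, and 'submitted in police station' as a verb phrase.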
Label / Output: Sentence A with POS tagging after removing special characters: [('Information', 'NN'), ('for', 'IN'), ('the', 'DT'), ('criminal', 'JJ'), ('case', 'NN'), ('under', 'IN'), ('section', 'NN'), ('154', 'CD'), ('submitted', 'VBN'), ('in', 'IN'), ('police', 'NN'), ('station', 'NN')]. Sentence A after phrasing: [([('Information', 'NN')], 'NounPhrase'), ([('the', 'DT'), ('criminal', 'JJ'), ('case', 'NN')], 'NounPhrase'), ([('submitted', 'VBN'), ('in', 'IN'), ('police', 'NN'), ('station', 'NN')], 'VerbPhrase')]. Sentence A phrases with phraseLength = 1: [([('Information', 'NN')], 'NounPhrase')]. Sentence A phrases with phraseLength > 1: [([('the', 'DT'), ('criminal', 'JJ'), ('case', 'NN')], 'NounPhrase'), ([('section', 'NN'), ('154', 'CD')], 'NounPhrase'), ([('submitted', 'VBN'), ('in', 'IN'), ('police', 'NN'), ('station', 'NN')], 'VerbPhrase')]. Figure 2: Identifying different types of phrases. After determining the terms, the weight of each term is calculated using Eq. (7). The calculated weights and classes of each term are then transformed into a normalized vector to use as the dataset of the ML classifier. The accuracy of the KNN algorithm is calculated on both the weighted and the un-weighted dataset. Table 1 shows a comparison of some data sets from the UCI dataset repository [18] and the present data set. Data name | Normal KNN | KNN (present): Eco-Hotel | 0.04092 | 0.03488; Legal Case Reports Data Set | 0.25714 | 0.22456; Present | 0.30335 | 0.25345. Table 1: Comparison of datasets for the KNN algorithm. V. CONCLUSION The importance of data preprocessing in document classification is vital for getting good classification accuracy, as presented in the results section. Unfortunately, KNN often fails to classify at higher accuracy. In order to get better results, other machine learning approaches like the support vector machine (SVM) or the Naive Bayes classifier can be used with the term weighting scheme for feature set creation. 
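The KNN classification step itself can be sketched in plain Python. This is a minimal illustration with invented toy vectors; in the pipeline above, the vectors would be the normalized term weights, and the actual model was built with scikit-learn rather than hand-rolled:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training vectors.

    train: list of (vector, label) pairs; vectors are equal-length tuples of floats.
    """
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors standing in for normalized term weights
train = [((0.1, 0.0), "civil"), ((0.0, 0.2), "civil"),
         ((0.9, 0.8), "criminal"), ((0.8, 0.9), "criminal"), ((1.0, 1.0), "criminal")]
print(knn_predict(train, (0.85, 0.9)))  # criminal
```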
Court-related documents can be easily categorized using our proposed method, enhancing operational efficiency in the judiciary sector of Bangladesh. ACKNOWLEDGMENT The authors gratefully acknowledge Wali Mohammad Abdullah, Military Institute of Science & Technology, Bangladesh, for his valuable suggestions on this research project. FUTURE WORK</s>
|
<s>In this paper an ML classifier is implemented to classify the English documents of the judicial system of Bangladesh. Our future work is to use ML classifiers to classify the judicial documents of Bangladesh which are written in the Bengali language. Furthermore, future work will include an analysis of the performance of different classifiers. REFERENCES [1] Thaoroijam, Kabita. "A Study on Document Classification using Machine Learning Techniques." International Journal of Computer Science Issues (IJCSI) 11.2 (2014): 217. [2] Khan, Aurangzeb, et al. "A review of machine learning algorithms for text-documents classification." Journal of Advances in Information Technology 1.1 (2010): 4-20. [3] Sebastiani, Fabrizio. "Machine learning in automated text categorization." ACM Computing Surveys (CSUR) 34.1 (2002): 1-47. [4] Brücher, Heide, Gerhard Knolmayer, and Marc-André Mittermayer. "Document classification methods for organizing explicit knowledge." (2002). [5] Wei, Chih-Ping, and Yuan-Xin Dong. "A mining-based category evolution approach to managing online document categories." System Sciences, 2001. Proceedings of the 34th Annual Hawaii International Conference on. IEEE, 2001. [6] Li, Yong H., and Anil K. Jain. "Classification of text documents." The Computer Journal 41.8 (1998): 537-546. [7] Trstenjak, Bruno, Sasa Mikac, and Dzenana Donko. "KNN with TF-IDF based Framework for Text Categorization." Procedia Engineering 69 (2014): 1356-1364. [8] Lewis, David D., and Marc Ringuette. "A comparison of two learning algorithms for text categorization." Third Annual Symposium on Document Analysis and Information Retrieval. Vol. 33. 1994. [9] Rafiei, Javad, et al. "Source Retrieval Plagiarism Detection based on Noun Phrase and Keyword Phrase Extraction."</s>
<s>[10] Sánchez-Vega, Fernando, et al. "Determining and characterizing the reused text for plagiarism detection." Expert Systems with Applications 40.5 (2013): 1804-1813. [11] Bui, Duy Duc An, Guilherme Del Fiol, and Siddhartha Jonnalagadda. "PDF text classification to leverage information extraction from publication reports." Journal of Biomedical Informatics 61 (2016): 141-148. [12] Chen, Jingnian, et al. "Feature selection for text classification with Naïve Bayes." Expert Systems with Applications 36.3 (2009): 5432-5435. [13] Hung, Chihli, and Shiuan-Jeng Chen. "Word sense disambiguation based sentiment lexicons for sentiment classification." Knowledge-Based Systems 110 (2016): 224-232. [14] Gao, Yunlong, and Feng Gao. "Edited AdaBoost by weighted kNN." Neurocomputing 73.16 (2010): 3079-3088. [15] Jindal, Rajni, and Shweta Taneja. "A Lexical Approach for Text Categorization of Medical Documents." Procedia Computer Science 46 (2015): 314-320. [16] Sabbah, Thabit, et al. "Modified frequency-based term weighting schemes for text classification." Applied Soft Computing 58 (2017): 193-206. [17] Barrett, Neil, and Jens Weber-Jahnke. "A token centric part-of-speech tagger for biomedical text." Artificial Intelligence in Medicine 61.1 (2014): 11-20. [18] "UCI Machine Learning Repository: Data Sets", Archive.ics.uci.edu, 2017.</s>
|
<s>Performance Analysis of Supervised Machine Learning Approaches for Bengali Text Categorization. Ronald Tudu∗, Shaibal Saha, Prasun Nandy Pritam, Rajesh Palit, Department of Electrical and Computer Engineering, North South University, Dhaka. Email: ∗ronald.tudu@northsouth.edu. Abstract—In this digital era, an enormous amount of data is being generated every day, and most of it is unstructured textual data. An automated text classifier helps to categorize texts automatically into pre-defined categories. With the help of machine learning, we can learn the features of pre-categorized documents and predict a document's category. Bengali is one of the most spoken languages in the world, and it has become essential to implement automated text categorization for the Bengali language. Text categorization mostly uses data mining algorithms along with NLP tools, feature extraction and selection methods, and vector space modeling. In this paper, we have measured the performance of the Support Vector Machine (SVM), Multinomial Naive Bayes (MNB), Stochastic Gradient Descent (SGD) and Logistic Regression (LR) methods using an open source Bengali newspaper article corpus containing 84,906 articles in 10 categories. The impact of the size of the training dataset on the accuracy of the classification was examined for the different algorithms. We have documented the execution time to train the methods and discussed issues and challenges in Bengali text categorization. This paper can be used as a reference work for future researchers in Bengali text categorization. Index Terms—Text categorization, Machine learning, Bengali, Performance analysis. I. INTRODUCTION Automated text categorization is one of the emerging subjects in text mining. The need for text categorization is increasing due to the massive amount of textual digital data, which is growing at an exponential rate.</s>
It is estimated that the amount ofunstructured data will be 40 zeta-byte by 2020. Machinelearning approaches are used to categorize text with otherseveral data mining algorithms. According to Islam et al. [9],the accuracy rate of classifiers increases as the size of trainingdata increases. Text categorization is used in many applicationssuch as content tagging, spam filtering and business intelli-gence.Bengali is spoken in Bangladesh along with two Indianstates West Bengal and Tripura. The number of Bengalispeakers is approximately 200 million. There are about 80.83million Internet users at the end of January 2018 [1], and 30million of them are labeled as social media users (2018) [2] inBangladesh. The need for text categorization for the Bengalilanguage is appealing for researchers and scientists to analyzemass opinion, perspective, and detecting a subject of interestin a conversation. The amount of information available onthe web is tremendous and increasing at an exponential rate.Lots of work have been done in sentiment analysis in differentlanguage especially in English and Indonesian to analyzecyber-bullying in text [19]. There are also many works donein Indian regional languages [16].As a necessity of text mining is increasing, the researchersof Bangladeshi or some Indian researchers focused on develop-ing applications using text mining. Thus a number of researchwork have been done in Bangla text mining too. Name entityrecognition [3] is one of them where they categorized the nametag from articles. Another work used a supervised learningmethod for Bangla web document categorization. The mainproblem is all the researches in Bangla text mining are basedon theory, but there is less application in</s>
|
<s>this sector. There are numerous works on sentiment analysis in Bengali, and some of the research shows good results with good accuracy. Most of the research works use Romanized Bangla text with deep recurrent models. There are some works where TF-IDF was combined with SVM to evaluate the performance of the model [9]. With so much data on the web in Bengali, it becomes very difficult to find data of interest. When you post a question in a forum or on a social media site, it becomes necessary to categorize the text for the viewers to get maximum output. On the other hand, if you want your post to be noticed in a forum, it becomes vital to categorize the text and hash-tag the keywords. The problem is that Bengali text categorization has rarely been done before on an application platform, although it is essential for Bengali people to have texts categorized for their benefit. It is hard to determine the most suitable topic name for many types of writing. The main technical challenges and issues come from working with unstructured data like text. Pre-processing is the most difficult task in this project, since there are very few NLP tools, such as stemmers, to pre-process Bengali text and to select the best stop words. Working with a very big data set is also a challenge, because we have to be careful about over-fitting. In this paper, we have presented a performance analysis of four supervised text categorization techniques on Bengali newspaper articles. The accuracy rates of the classifiers on different sizes of datasets were examined, and the variation of the accuracy rates of the classifiers was observed with respect to the size of the datasets. Confusion matrices for each classifier were built to see the misclassification among the categories. We have included the supervised learning classifiers Support Vector Machine (SVM), Multinomial Naive Bayes (MNB), Stochastic Gradient Descent (SGD) and Logistic Regression (LR) in the experiments.</s>
According to Jindal et al. [4], the most popular method is the Support Vector Machine (SVM) [5], followed by K-Nearest Neighbor (KNN) and Naive Bayes (NB). 221 2018 5th Asia-Pacific World Congress on Computer Science and Engineering (APWC on CSE); 978-1-7281-1390-6/19/$31.00 ©2019 IEEE; DOI 10.1109/APWConCSE.2018.00043. For vectorization of the text, TF-IDF and N-gram approaches were used in this analysis. We have considered a dataset consisting of 84,906 Bengali newspaper articles in 10 categories. The details of the dataset are given in Section IV. The preliminary work was pre-processing and vectorizing the data. Secondly, we trained the classifiers with the datasets and evaluated the accuracy. The final step was to predict the category, which is the goal of text categorization. The performance of different classifiers such as SVM, KNN, SGD, NB and DT was investigated earlier; however, the authors of those works used smaller datasets. The authors in [9] concluded that by raising the size of the training dataset, the accuracy of the classifier can reach 100%. Although this trend is seen in machine learning approaches, in our experiment on Bengali text categorization we observed that using a large training dataset overfits the classifier, and the accuracy reduces. The contributions of the authors in this paper can be documented as follows. • We have compared performances of popular classifiers for Bengali text categorization; • In the experiment a large sample dataset containing more than 84</s>
|
<s>thousand articles was used; • It was observed that the classifiers overfit with the training data due to the large number of features in big datasets. An effective stemmer is essential for the Bengali language to reduce the number of redundant features; • Our analysis concludes that, using the knowledge of the confusion matrix, a textual document can be tagged with multiple categories. The paper is organized in the following order. Section II contains a discussion of the previous research works in this field. The methodology for conducting the experiments is given in Section III. The results are presented and analyzed in Section V, where the confusion matrices of all four classifiers are also discussed. Section VI discusses future work and Section VII concludes this research work. II. RELATED WORK While reviewing the literature on text categorization, we observe that many research works have been done for the English language and other languages like Punjabi [6]. A few works have been done for the Bengali language. Some prominent supervised techniques such as SVM, K-Nearest Neighbor (KNN) and Decision Tree (DT) are widely used, and we discuss some of the works in this section. The authors in [7] used a number of supervised learning methods for Bangla web document categorization. The paper automatically determined the category of a document from a predefined set. They analyzed five categories with a dataset of one thousand records. The authors explored four classifiers, namely Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Naive Bayes (NB) and Decision Tree (DT). In this paper, the categories are Business, Sports, Health, Technology and Education. About 90% of the total dataset was used as training data. The total number of tokens was 22,218. 
The authors aimed to find the best classifier for Bengali web documents among the four classifiers and concluded that SVM gives the highest accuracy. The authors in [8] described Bengali document text categorization with the Stochastic Gradient Descent (SGD) technique. They designed seven experiments to compare their proposed method with others, dividing their methods into three categories. They conducted their experiments with 9,127 records in 9 categories. Approximately 60% of the dataset was used for training and the rest for testing. The paper also presents the confusion matrices of the different classifiers, which the authors used to compare their methods. The authors find that the value of the F1 score (i.e., a measure of test accuracy) is higher for SGD than for the other approaches. Term Frequency-Inverse Document Frequency is used for calculating the weight of a word in a document. Islam et al. in [9] used TF-IDF weighted length normalization for feature selection after the pre-processing of the dataset was finished. The authors used 31,906 records in 12 categories. They examined the accuracy of the SVM approach with different numbers of training records, where the highest number was 31,906 and the lowest was 3,191. The authors also calculated recall, precision and F1-measure for all 12 categories, and claimed that with a large enough dataset they can reach an accuracy of 100%. Chy, Seddiqui and Das [10] built a Bengali news classifier with the Naive Bayes approach. They developed their own crawler to extract the news articles from web pages. The Naive Bayes classifier is used in text classification because of its simplicity. In this paper the authors</s>
|
<s>build their own steps to classify the news documents. The authors also used full-text RSS, then pre-processed their data with tokenization. The authors use the TREC [11] evaluation technique to produce a recall-precision graph. The authors in [12] stated that they use 4,000 records in 8 categories, with 500 records available in each category. After applying several pre-processing methods, 9,57,623 tokens were obtained. The paper described the application of a TF-IDF-ICF feature with a dimension reduction technique, and also indicated the change in accuracy rates after applying the reduction techniques. Jia and Mu [13] described a classification system with a large corpus in the Chinese language. The authors used 6 categories and applied SVM techniques to Chinese, using 50% of the data for training. They found a high F1 score for Chinese. There are also some works done on N-gram based Bengali news text categorization [14]. After reviewing many papers on Bengali text classification, we found that there is no such work on a dataset as large as the one used in this work. There are many technical challenges involved in large datasets; we observe those technical challenges and document them in this paper. III. METHODOLOGY The process of Bengali text categorization is described in this section. The flowchart of the methodology is given in Fig. 1. The whole process can be grossly divided into five steps as follows [4]. Fig. 1. Flow chart of the methodology followed. • Dataset selection • Pre-processing • Vectorization • Classifier selection • Evaluation A. Dataset Selection Dataset selection plays a very vital role in supervised machine learning approaches for categorization or classification. The quality and the quantity of the data shape the goal of predicting the right class for a text. Newspaper articles are rich in information, and it is easy to create a dataset from them for learning purposes. We have selected an open source Bengali corpus.</s>
This corpus is createdfrom Bengali newspapers articles, and consists 12 categories,and among them we have used 10 in our work. The categoriesare Accident (AC), Crime (CR), Economics (EC), Education(ED), Entertainment (EN), Environment (EV), International(IN), Politics (PO), Science & Technology (ST), and Sports(SP).The two other categories, Art and Opinion were not includedas they are more prone to miss classification for the mixtureof different content in these two categories. This dataset canbe collected online at https://scdnlab.com/corpus/. The datasetis licensed under MIT and free to be used by anyone. Thisdataset is the result of a thesis conducted under the Departmentof Computer Science and Engineering in Shahjalal Universityof science and technology, Bangladesh.B. Pre-processingIn this step, our goal is to remove noises from the data. Asthe texts are very unstructured way to represent informationand contain noise, extraction of features from texts is verymuch challenging. There are some basic ways to pre-processtext to preserve only relevant pieces of information. Wedivided the pre-processing task into three steps as follows.• Punctuation removal• Stop-words removal• Stemming1) Punctuation removal: All the punctuation are eliminatedfrom the documents they play a very little or no role incategorization. They can be considered as noise. We have alsoremoved Bengali numerals and special characters for the samereason. Some of the examples of the punctuation, numerals,and special characters that we removed are given below inFig. 2.Fig. 2. List of removal</s>
|
<s>symbols.2) Stopword Removal: In computing, stop words are wordswhich are filtered out before or after processing of naturallanguage data (text). Though stop words usually refer tothe most common words in a language, there is no singleuniversal list of stop words used by all natural languageprocessing tools, and indeed not all tools even use such alist. Some tools specifically avoid removing these stop wordsto support phrase search. In our case, we try to filter outall the less contributing words in the Bengali language thathelps the classifier to predict classes with less noise. Wecreated a list of 501 stopwords for the Bengali language.The list contains most common Bengali words which includedeterminers, prepositions and coordinating conjunctions. Thelist is given below in Fig. 3.Fig. 3. List of stop words.3) Stemming: Stemming is the process of reducing a wordto its root form. Stemming is used for reducing differ-ent derivational or inflectional variants of the same wordto increase effectiveness and efficiency of information re-trieval [17]. It is an essential part in natural language un-derstanding (NLU) and natural language processing (NLP).An example of stemming is given in Fig. 4 in the context of223text categorization. For this research, we used a light-weightstemmer.Fig. 4. Example of Bengali Stemming.C. VectorizationVectorization is the process to represent documents in vectorspace. The process is to create mapping from term to term. Itis called term because sometimes it can be arbitrary n-grams.In vectorization, each row represents a document and eachcolumn represents a term. Vectorization counts the numberof term occurred in a document and represent it as a matrix.Vectorizing can be done in two ways: using vocabulary orfeature hashing.D. Classifier selectionSupport Vector Machine (SVM) is a supervised learningmethod, which can be used in classification problems. 
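The vocabulary-based count vectorization described in the previous subsection can be sketched as follows. This is a stdlib-only illustration; the paper does not say which implementation was used (a library routine such as scikit-learn's CountVectorizer would be a common choice).

```python
# Minimal count vectorization: each row is a document, each column a term.
# (Illustrative sketch only; not the authors' actual implementation.)
from collections import Counter

def count_vectorize(docs):
    """Map each document to a row of term counts over a shared vocabulary."""
    vocab = sorted({tok for d in docs for tok in d.split()})
    index = {t: i for i, t in enumerate(vocab)}
    matrix = []
    for d in docs:
        row = [0] * len(vocab)
        for tok, n in Counter(d.split()).items():
            row[index[tok]] = n
        matrix.append(row)
    return vocab, matrix

docs = ["good match today", "good good news"]
vocab, m = count_vectorize(docs)
print(vocab)   # ['good', 'match', 'news', 'today']
print(m)       # [[1, 1, 0, 1], [2, 0, 1, 0]]
```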
In Support Vector Machine classification, the dataset is plotted in an n-dimensional space where each feature represents the value of a coordinate. Then a logical hyper-plane is drawn that divides the classes. Since textual data are linearly separable, this algorithm works well in text categorization [18]. Among the popular SVM kernels such as polynomial and RBF, we have used the linear kernel, as most text categorization problems are linearly separable [18]. The linear kernel has fewer parameters to optimize and works well with a large number of features. The performance of classifiers based on other kernels does not increase in higher-dimensional spaces.
Naive Bayes is a popular probabilistic approach to classification problems. It works on independent conditional probabilities rather than the particular distribution of each feature. Multinomial Naive Bayes works on the multinomial distribution and does well on countable data such as word counts in texts.
In Stochastic Gradient Descent (SGD), a batch is the total number of examples used to calculate the gradient in a single iteration. "Stochastic" indicates that each batch is chosen at random. Stochastic gradient descent uses only a single example per iteration. Although SGD works given enough iterations, it is exceedingly noisy.
Logistic Regression (LR) is used when the dependent variable is dichotomous. Like other regression analyses, it is used for predictive analysis. It describes data and clarifies the relationship between one dependent binary variable and one or more ordinal, nominal, or interval independent variables.
E. Evaluation
The following metrics were used in analyzing the performance of the classifiers in our experiments.
• Recall: Recall is also referred to as sensitivity. It is the fraction of relevant documents that are successfully retrieved, calculated as the number of true positives divided by the number of true positives and false negatives.
• Precision: Precision relates to the positive predictive value. It is calculated as the number of true positives divided by the number of true positives and false positives, and measures the true-positive accuracy on the retrieved documents.
• F1 score: The F1 score measures the accuracy of a classifier on a test dataset. It is the weighted harmonic mean of precision and recall.
• Confusion matrix: A confusion matrix is the most unambiguous way to represent the prediction results of a classifier. It shows the number of documents misclassified between each pair of categories.
IV. EXPERIMENTAL SETUP
As mentioned in Section III, a Bengali corpus [15] was built in a thesis work in the Department of Computer Science and Engineering, Shahjalal University of Science and Technology. This corpus is open for public use and was used in the experiments. There are 12 categories; however, 2 categories, Art and Opinion, were excluded in this experiment.
1) Dataset representation: Table I shows the total number of samples used in the experiments. From all samples, 90% of the data are used as training data and the rest as test data. The total sample size is 84,906, and four datasets were made consisting of 10,027, 42,370, 60,000 and 84,906 documents.
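The per-category 90/10 split above can be sketched as follows. This is a stdlib-only sketch; the paper does not state which tooling was used (scikit-learn's train_test_split with `stratify` would be the usual shortcut).

```python
# Hold out 10% of each category as test data, so the class balance of
# Table I is preserved (a per-category, i.e. stratified, 90/10 split).
import random

def stratified_split(samples, labels, test_frac=0.10, seed=0):
    rng = random.Random(seed)
    by_cat = {}
    for s, l in zip(samples, labels):
        by_cat.setdefault(l, []).append(s)
    train, test = [], []
    for cat, docs in by_cat.items():
        rng.shuffle(docs)
        n_test = round(len(docs) * test_frac)
        test += [(d, cat) for d in docs[:n_test]]
        train += [(d, cat) for d in docs[n_test:]]
    return train, test

# Toy data: 60 Sports and 40 Politics articles.
samples = [f"doc{i}" for i in range(100)]
labels = ["SP"] * 60 + ["PO"] * 40
train, test = stratified_split(samples, labels)
print(len(train), len(test))   # 90 10
```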
In every dataset, the percentage of training and test data remained the same.

TABLE I: TRAINING & TESTING DATASET
Category                      Samples  Training Set  Test Set
Crime (CR)                       8565          7709       856
Economics (EC)                   3445          3101       344
International (IN)               5151          4636       515
Sports (SP)                     11888         10700      1188
Accident (AC)                    6324          5692       632
Environment (EV)                 4308          3878       430
Science and Technology (ST)      2901          2611       290
Entertainment (EN)              10093          9084      1009
Politics (PO)                   20038         18035      2003
Education (ED)                  12193         10974      1219

2) Specification of the computer: A Windows 10 64-bit desktop PC was used in the experimentation, with an Intel Core i7 3.6 GHz 64-bit processor, 16 GB DDR4 RAM, 512 GB SSD storage, and an NVIDIA GeForce graphics card.
V. RESULTS
In this section, the accuracy rates of the SVM, MNB, SGD and LR classifiers are compared for different training and test datasets. Then the impact of the size of the training datasets on the classifiers is discussed, followed by a comparison of the run-time required for training the classifiers. The confusion matrices for each classifier are then discussed to show misclassifications among categories.
The accuracy rates of all four supervised techniques are given in Fig. 5. The highest accuracy rate, 93.3%, is achieved for the smallest dataset of 10,027 articles. The Support Vector Machine (SVM) algorithm shows the highest accuracy rates on all four dataset sizes. In contrast, Multinomial Naive Bayes (MNB) shows the lowest accuracy rate at 75.16%. The accuracy rates vary by 2% to 3% after applying dimension reduction. For example, SVM gives an 87.5% accuracy rate without dimension reduction, but 89.2% when dimension reduction is applied.
Fig. 5. Accuracy rates of SVM, MNB, SGD, LR classifiers.
Fig. 6. Accuracy rates of classifiers with varying training data size.
Fig. 6 shows the decreasing accuracy rate with respect to the increasing size of the training data for all four classifiers. The main reason for this decreasing pattern is over-fitting. Due to the higher number of features, all four classifiers show lower accuracy with large training data than with the small training dataset. For example, SVM shows 93.3% with the 10,027-sample dataset but 89.2% with the 84,906-sample dataset.
The execution times of all four classifiers for the different datasets are shown in Fig. 7. Multinomial Naive Bayes (MNB) always shows the least run-time due to its simple form of classification: MNB takes only 0.23 minutes on the full dataset, whereas SGD and LR take above 4 minutes. These processes were executed on the high-end desktop PC whose configuration is given in Section IV.
Fig. 7. Run-time comparison of SVM, MNB, SGD, LR.
The confusion matrix for the Support Vector Machine (SVM) classifier is given in Table II. Compared to the other confusion matrices, this matrix clearly shows a smaller number of conflicts between categories.

TABLE II: CONFUSION MATRIX FOR SVM
      CR   EC   IN   SP   AC   EV   ST   EN   PO   ED
CR   748    1    7    9   16   14    4   15   58   13
EC     3  332    4    3    0    1    3    1   20    1
IN     3    4  518    5    4    1    3    6    5    7
SP     8    0    7 1091    4   12    1   17   11   20
AC     5    0    1    9  573    9    3    8   10    5
EV     8    2    3    6   11  296    0   11   42   29
ST     2    1   11    3    0    1  236   10    1    4
EN    13    1    5   18    4    7   11  881   24   26
PO    32    3    7   27   11   44    4   36 1773  103
ED    20    3    4   15   10   37    3   20   83  986

The highest conflict is between the Politics and Education categories, because there are overlapping features between them.
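A confusion matrix like Table II can be computed directly from the true and predicted labels. The stdlib-only sketch below shows the convention used in the tables (rows = true category, columns = predicted category); library helpers such as scikit-learn's confusion_matrix compute the same thing.

```python
# Build a confusion matrix: m[i][j] counts documents whose true category
# is labels[i] and whose predicted category is labels[j].
def confusion_matrix(y_true, y_pred, labels):
    idx = {l: i for i, l in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

# Toy example with the two most-confused categories from Table II.
labels = ["PO", "ED"]
y_true = ["PO", "PO", "ED", "ED", "PO"]
y_pred = ["PO", "ED", "ED", "PO", "PO"]
print(confusion_matrix(y_true, y_pred, labels))  # [[2, 1], [1, 1]]
```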
Overall, the conflict in the SVM classifier is negligible, and this is evident in the accuracy rate of SVM.
Table III shows the confusion matrix of the Multinomial Naive Bayes (MNB) classifier. The MNB classifier has many major conflicts between pairs of categories: 263 Politics articles overlap with the Education category, and many other categories overlap with each other. MNB also has higher conflict rates between the Accident and Crime categories and between the Politics and Crime categories.

TABLE III: CONFUSION MATRIX FOR MNB
      CR   EC   IN   SP   AC   EV   ST   EN   PO   ED
CR   551    5    5   18   46    9   11   51   36   26
EC     1  239    2    2    1    3    3    2   16    8
IN    15    8  423    9   17    8    5   10   10   23
SP     5    2   29 1019    5   16    3   19   14   40
AC    96    0    4   23  486   14    5   10   24   19
EV     5    3    2    3   17  157    0    5   14   19
ST     3   10   12    3    2    0  198   10    1   17
EN    21    9   22   27    7   14   13  771   16   36
PO   113   51   54   58   35  129   15   83 1795  263
ED    32   20   14   24   17   72   15   44  101  743

Table IV refers to the confusion matrix of the Stochastic Gradient Descent (SGD) classifier. For some category pairs, the SGD confusion matrix shows no conflict at all. Similar to Tables II and III, SGD also shows conflict between Politics and Education, but at a higher rate than the other two.

TABLE IV: CONFUSION MATRIX FOR SGD
      CR   EC   IN   SP   AC   EV   ST   EN   PO   ED
CR   556    0   40   12   20    7   10   40   10   14
EC     3  229    4    2    1    3    4    0    6    1
IN     6    1  231    1    2    3    5    2    0    3
SP    15    9   54 1094   19   33   18   38   22   64
AC    44    3   10   10  525   16    5    3   11    9
EV     1    2    1    0    6  117    1    0    3    1
ST     0    1    5    0    0    0  149    0    0    1
EN    13   16   21   21    4   11   30  803    6   21
PO   178   77  182   43   50  183    4   36 1948  265
ED    26    9   19    3    6   49    3   20   21  785

Table V shows the confusion matrix of the Logistic Regression (LR) based classifier. Logistic regression is basically used for statistical analysis, so the conflicts among the categories in LR are low, which allows better accuracy rates than the MNB and SGD classifiers.

TABLE V: CONFUSION MATRIX FOR LR
      CR   EC   IN   SP   AC   EV   ST   EN   PO   ED
CR   739    0    9    7   18   14    9   25   34   16
EC     2  315    3    3    0    4    3    0   19    1
IN     5    4  495    1    7    4    6    5    6    6
SP     6    0   16 1102    4   13    2   12   13   30
AC     8    0    1    9  562   10    5    5   10    8
EV     2    3    3    2   11  273    0    2   13   17
ST     0    4    8    2    0    2  208    5    0    4
EN    16    1   12   22    4    7   20  878   14   23
PO    45   14   13   32   18   53   11   47 1861  106
ED    19    6    7    6    9   42    4   26   57  983

VI. FUTURE WORK
During the experiments, we observed that the accuracy of the categorization tools decreased as the training data size increased. This is one of the most interesting observations we made while categorizing a large amount of Bengali textual data. It happens due to over-fitting of the model, which can be mitigated by reducing the number of features in the training dataset. Another cause of over-fitting is the overlapping of features, which creates ambiguity and leads to misclassification. In Bengali, a word can appear in many different forms, and those forms need to be reduced to their root words to avoid redundant features. This task should be done during the pre-processing phase and is termed stemming. There is a lack of an effective stemmer for the Bengali language, and building one can be considered future work of this paper. A text can also fall into multiple categories; by analyzing the confusion matrices, meaningful insights can be extracted and documents can be double-categorized.
VII. CONCLUSIONS
Text categorization is gaining importance as there has been tremendous advancement in the field of machine learning. Thus there is a strong necessity to examine the existing strategies for the Bengali language. This research mainly concerns the categorization of Bengali newspaper articles to find the topics of large-scale document collections. In the context of social networks, categorization helps to profile people according to their interests. Bengali text classification is a new research area for the Bengali language, so it will be very helpful for Bengali literature and the web to analyze data and extract information. In this
paper, we analyzed four algorithms and documented their performance. Only supervised learning methods were considered. Among the four algorithms, Support Vector Machine gives the highest prediction accuracy: on average, 87.5% accuracy was achieved with SVM. Thus SVM was the best algorithm, with the highest accuracy rate and an average training time. The authors faced some resource limitations while conducting this research. In the pre-processing phase, a good stemmer was very much needed to reduce the number of features and to avoid over-fitting on the training dataset. The findings in this research work will provide information and assist future researchers in this area.
REFERENCES
[1] "Bangladesh Telecommunication Regulatory Commission," [Online]. Available: http://www.btrc.gov.bd/content/internet-subscribers-bangladesh-january-2018. [Accessed 25 10 2018].
[2] "The Financial Express," [Online]. Available: https://thefinancialexpress.com.bd/sci-tech/social-media-users-30-million-in-bangladesh-report-1521797895/. [Accessed 25 10 2018].
[3] A. Senapati, A. Das and U. Garain, "Named-Entity Recognition in Bengali," in Post-Proceedings of the 4th and 5th Workshops of the Forum for Information Retrieval Evaluation, 2013.
[4] R. Jindal, R. Malhotra and A. Jain, "Techniques for text classification: Literature review and current trends," 2015.
[5] C. Cortes and V. Vapnik, "Support Vector Networks," Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.
[6] V. Gupta and G. Lehal, "Automatic Punjabi Text Extractive Summarization System," in Proceedings of COLING 2012, 2012.
[7] A. K. Mandal and R. Sen, "Supervised Learning Methods for Bangla Web Document Categorization," International Journal of Artificial Intelligence & Applications, vol. 5(5), pp. 93-105, 2014.
[8] F. Kabir, S. Siddique, M. R. A. Sabbir and M. N. Huda, "Bangla text document categorization using Stochastic Gradient Descent (SGD) classifier," 2015 International Conference on Cognitive Computing and Information Processing (CCIP), 2015.
[9] M. S. Islam, F. E. M. Jubayer and S. I. Ahmed, "A support vector machine mixed with TF-IDF algorithm to categorize Bengali document," 2017 International Conference on Electrical, 2017.
[10] A. N. Chy, M. H. Seddiqui and S. Das, "Bangla news classification using naive Bayes classifier," 2014.
[11] E. M. Voorhees and D. K. Harman, "TREC: Experiment and Evaluation in Information Retrieval," MIT Press, Cambridge, 2005.
[12] A. Dhar, N. S. Dash and K. Roy, "Categorization of Bangla Web Text Documents Based on TF-IDF-ICF Text Analysis Scheme," 52nd Annual Convention of the Computer Society of India, 2018.
[13] Z. Jia and J. Mu, "Web Text Categorization for Large-scale Corpus," 2010 International Conference on Computer Application and System Modeling (ICCASM 2010), 2010.
[14] M. Mansur, "Analysis of N-Gram based text categorization for Bangla in a newspaper corpus," BRAC University, 2006.
[15] Md. Saiful Islam, Md. Abu Shahriar Ratul and Md. Yusuf Khan, "An Open Source Bengali Corpus," Shahjalal University of Science and Technology.
[16] M. Hanumanthappa and M. Swamy, "A Detailed Study on Indian Languages Text Mining," International Journal of Computer Science and Mobile Computing, vol. 3, no. 11, pp. 54-60, November 2014.
[17] D. Ganguly, J. Leveling and G. Jones, "Bengali (Bangla) Information Retrieval," in Technical Challenges and Design Issues in Bangla Language Processing, pp. 273-301, 2013.
[18] T. Joachims, "Text categorization with Support Vector Machines: learning with many relevant features," in Proceedings of the 10th European Conference on Machine Learning (ECML'98), 1998.
[19] Hariani and I. Riadi, "Detection of Cyberbullying On Social Media Using Data Mining Techniques," International Journal of Computer Science and Information Security, pp. 244-250, 2017.
An Empirical Framework to Identify Authorship from Bengali Literary Works
Sumnoon Ibn Ahmad, Lamia Alam, and Mohammed Moshiul Hoque
Department of Computer Science and Engineering (CSE)
Chittagong University of Engineering & Technology (CUET)
Chattogram-4349, Bangladesh
{sumnoon52, lamiacse09, mmoshiulh}@gmail.com
Chapter, July 2020. DOI: 10.1007/978-3-030-52856-0_37
Abstract. Authorship attribution is the process of identifying the probable author of an unknown document. This paper proposes a neural network based framework, which identifies the authorship of Bengali literary documents. For this purpose, a corpus consisting of 12,142 text documents of 23 writers/bloggers is built. A static dictionary is used for count vectorization, and important features are selected using information gain. The proposed system is trained with 9,099 documents and tested with 3,043 documents.
The experimental result shows that neuralnetwork with n-gram and parts of speech (PoS) features achieved 94%accuracy on developed corpus.Keywords: Bangla language processing · Authorship attribution · Fea-ture extraction · Machine learning.1 IntroductionDue to the rapid growth in the use of internet and its effortless access via dig-ital devices a substantial contents are uploaded enormously and quickly on theweb as digital form. Also increasing popularity of text digitization and onlinedocumentation has made it very difficult to detect the authorship of a digitaltext. Therefore, automatic authorship detection or attribution has gained muchattention in recent years to identify the original author from a huge amount ofdigital contents. Authorship detection is conducted mainly to verify authorshipof a particular text. It is conducted by comparing works of other authors with theauthor in question. There are many application of authorship attribution suchas plagiarism detection, resolving ownership dispute of unknown text, forensiclinguistics etc. [14, 18, 6].Authorship detection is one of applied field of Natural Language Processing(NLP), which utilizes the stylometric approach to determine authorship of un-known text. The stylometric approach refers to the statistical approach to differ-entiate writing styles of different authors [9]. Most of the authors tend to followunique behavior in their text whether it is the use of certain word or collectionof words or sometimes it maybe certain style of writing. Stylometry helps in de-termining these behaviors of the author. In order to do that, multiple texts with2 S. I. Ahmad et al.known authors are used to extract stylometric features and using these featuresthe text of unknown author is compared with text of known authors.Although a substantial amount of works have been conducted on authorshipattribution in English and other European languages, no remarkable work hasbeen done yet on authorship attribution for text</s>
|
<s>written in Bengali language.The major barrier of performing research on authorship attribution in Bengalidue to the lack of linguistic resources in digital form and inadequate corpora.There are many well-known writers in Bengali literature and important prop-erties can be discovered from their writing variations. These properties can beuseful for literary, history, social and cultural studies respectively.An author usually follows an unique writing style or feature which may be uti-lized to identify authorship of a particular writing. Stylometry concerns thewriting style and it investigates the writing to find the specific pattern or charac-teristics of that writer. The major contributions of this work is that, we proposeda neural network based authorship identification system for Bengali texts usingfeature extraction method to extract n-gram and parts of speech (PoS) featuresto improve accuracy. In order to train and test our system we developed a Ben-gali text corpora including 23 authored texts which contains about 12,142 textsfiles. Also, we evaluated the proposed framework against two other algorithms-Random-forest and Support Vector Machine (SVM) are implemented and testedon our developed dataset. The experimental finding reveals that, the proposedneural-network based framework with PoS features achieved the higher accuracythan other algorithms in detecting authorship.2 Related WorkAutomatic identification of authorship is a long studied research issue for wellresourced languages like, English. However, it is in preliminary stage till nowwith respect to Bengali literature.A character-level CNN method was proposedin [15], which identifies authorship and achieved 96% accuracy for 6 authors and69% accuracy for 14 authors respectively . Marouf et al. proposed a techniquethat used BanglaMusicStylo dataset and gained 86.29% accuracy on 1470 Ben-gali songs of Rabindranath Tagore and Kazi Nazrul Islam [17, 11]. 
A hierarchical classifier based method was developed to detect the authorship of unknown text [7]. A neural network based approach was proposed which achieved 85% accuracy for 5 writers in the Bengali language; it used word length and Wh-words as features on a small dataset [12]. Islam et al. used n-gram, conjunction, and pronoun features to detect the authorship of 10 authors, gaining 96% accuracy [13]. Hossain et al. used word frequency, modified word frequency, and word-spelling features and gained 90.67% accuracy for 6 Bangladeshi writers [10]. Chakraborty et al. investigated ten-fold cross-validation and concluded that SVM is better than decision trees and neural networks for small datasets [6]. Phani et al. [18] devised a process with character bi-grams and tri-grams and word uni-, bi- and tri-grams; they used a corpus of three thousand texts from three prominent Bengali authors. Instead of using the literature of Bengali authors as a corpus, Das et al. [8] used text from four Bengali blog writers. They used various feature counts, such as the lengths of words and sentences, the number of parts of speech used in a sentence, and the number of words used in certain positions of the sentence. Saha et al. used a multi-layer perceptron to correctly attribute short texts to their authors on a Twitter dataset of four authors with 400 tweets per author, with an accuracy of 96.44% [21].
Most of the works stated above had very small datasets and little variation in author categories or limited writing styles. In contrast, we developed a neural network based system for Bengali authorship attribution that is trained and tested with a larger dataset.
3 Proposed Methodology
The proposed authorship detection system is divided into two phases: a training phase and a testing phase. First, the machine learning model is trained using the training dataset; then the classification accuracy of the model is evaluated using the testing dataset in the testing phase. Around 75% of the prepared dataset is used in training and 25% in testing. As our primary dataset was raw and full of noise, we had to perform some data cleaning and noise removal. The normalized data is then used to extract features. After extracting the features, the most useful features are selected using the information gain (IG) values of the features. Then the final dataset is prepared and used to train the classifier models. We used three classification algorithms and prepared four models. The neural network model was prepared in two ways: one model without parts of speech features and one model with them. We then compared both models on our test set, which was unknown to the models during the training period. A schematic representation of our proposed authorship detection system is illustrated in Fig. 1.
Fig. 1: Proposed authorship detection system: (a) training phase, (b) testing phase.
3.1 Input
We collected text from 23 writers covering various writing styles. We used the hold-out method for training and testing our model, as it works well on large datasets and needs less computational power. For training, the texts of a given writer were stored in a folder with the writer's name and compressed into the training set. For testing, a collection of texts unknown to the model during training is used, and authorship detection is performed with the help of the previously learned characterization of the writing styles. Fig. 2 shows a sample of raw data.
Fig. 2: Sample text.
3.2 Pre-processing of Raw Data
Raw data is not suitable for training purposes due to noise. Sometimes words from foreign languages are introduced into writings; in some cases these words help to detect the authorship of a particular text (e.g., the literature of Kazi Nazrul Islam uses Urdu words), while in other cases they produce unwanted noise. Therefore, pre-processing is used to reduce the error rate. Each text can be divided into multiple sentences, where the end or beginning of a sentence is determined by punctuation. A collection of Bengali and English punctuation marks is used to decompose the text into sentences. After the decomposition, the punctuation marks are removed, as they have no further significance. A dictionary of stop words is used to remove unrelated words from the text. A pre-processed text is shown in Fig. 3 as an example.
Fig. 3: Pre-processed sample text.
3.3 Feature Extraction
N-gram and PoS features are observed in the corpus. We experimented with uni-grams, bi-grams, and tri-grams of words and found that going beyond tri-grams does not give any significant additional information about authors. The training set is tokenized into uni-grams, which are then combined to create word bi-grams and tri-grams.
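Building word n-grams from a tokenized text can be sketched in a few lines. This is an illustrative stdlib-only sketch (shown with English tokens for readability), not the authors' code.

```python
# Build word n-grams by sliding a window of size n over the token list.
def word_ngrams(tokens, n):
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the quick brown fox".split()
print(word_ngrams(tokens, 2))  # ['the quick', 'quick brown', 'brown fox']
print(word_ngrams(tokens, 3))  # ['the quick brown', 'quick brown fox']
```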
A PoS tagger is used to identify tokens in each sentence. The tagger takes text as input, detects each word in the text, and assigns it a part of speech. A modified PoS tagger is used to tag words that fall outside these parts of speech. Due to the lack of a proper dynamic PoS tagger for Bengali, we had to create our own static PoS tagger, which can detect nouns, pronouns, adjectives, verbs, adverbs, and conjunctions. The PoS tagger utilizes a dictionary of words consisting of more than 50 conjunctions, 30 pronouns, 23,000 nouns, 1,100 adjectives, 70,000 verbs, and 16,000 adverbs. The conjunctions and pronouns were collected from [5]. The frequency of each feature is calculated from the text and used to find the important features. Fig. 4 shows a set of sample features.
Fig. 4: Sample text after feature extraction.
With the help of the word dictionary and the n-gram extractor, a large number of feature words are found in the training data. These features can be reduced by information gain (IG). IG is used to determine how much information can be extracted using a feature and how important the feature is to the overall prediction system. The information gain is calculated by Eq. 1:

IG(S, T) = E(S) − Σ_{t∈T} p(t) × E(t)    (1)

where E(S) is the entropy. Entropy is directly related to information gain in the sense that the higher the entropy of an event, the more information can be gained from that event. Entropy is calculated by Eq. 2:

E(S) = − Σ_{x∈S} p(x) × log2 p(x)    (2)

3.4 Final Dataset Generation
With the help of the information gain calculation and the stop word dictionary, we selected the most important features from our primary dataset, removed unnecessary words and stop words from the text, and prepared our final dataset. The final dataset is generated in .csv format.
3.5 Classifier
A neural network model is trained with the developed dataset [22].
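Eqs. 1 and 2 can be computed directly for a candidate feature. The sketch below scores a single binary word feature on a toy set of author labels; it is an illustration of the formulas, not the authors' feature-selection code.

```python
# Entropy (Eq. 2) and information gain (Eq. 1) of a binary word feature.
from collections import Counter
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(labels, partitions):
    """IG(S, T) = E(S) - sum over partitions t of p(t) * E(t)."""
    n = len(labels)
    return entropy(labels) - sum(len(t) / n * entropy(t) for t in partitions)

# Toy data: 8 documents by authors A/B, partitioned by whether each
# document contains a candidate feature word.
labels = ["A", "A", "A", "A", "B", "B", "B", "B"]
with_feat = ["A", "A", "A", "B"]      # documents containing the word
without_feat = ["A", "B", "B", "B"]   # documents not containing it
print(round(info_gain(labels, [with_feat, without_feat]), 3))
```

A feature whose presence splits the authors cleanly would score close to E(S); one that splits them evenly would score 0.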
The proposeneural network consist of three hidden layer with 128, 64 and 32 nodes in eachlayer. An activation function [20] is used to find the output from a node. In theproposed model, the rectified linear unit function a.k.a ReLU is used. Fig. 5illustrates the neural network model. The proposed neural network model haveused three hidden layer and one input and one output layer. In each hidden layernumber of nodes or neurons were 128, 64 and 32 respectively. Each neurons, alsoknown as perceptron[19], acts as a simple learner that takes one or multipleinputs and process them with a weight given on each of the input. Than itgenerates a binary decision. Using multiple similar neurons a layer of multi-layerperceptron is created. For weight optimization we have used Adam stochasticgradient-based optimization [16] and number of epoch was 3000.The training procedure consists of three major steps:– Step 1: Forward Pass In forward pass we run the sample vector frominput layer to output layer through multiple hidden layers. The input valueis multiplied with weight and a bias is added. Then the output is appliedthrough a activation function in our case ReLU function. Suppose,</s>
w denotes the vector of weights, x the vector of inputs, b the bias and φ the activation function; then for the i-th neuron the output y is given by Eq. 3:

y = φ(Σ_i w_i x_i + b) = φ(w · x + b)    (3)

Fig. 5: Multilayer perceptron model having three hidden layers

Activation Function: It determines the output of a node of the neural network, e.g. yes or no. Depending on the function, the value ranges from 0 to 1 or from −1 to 1, etc. In our system we use the Rectified Linear Unit (ReLU) activation function, given by Eq. 4:

φ(x) = max(0, x)    (4)

It is one of the simplest and most popular activation functions. The biggest advantage of ReLU is the non-saturation of its gradient, which accelerates Adam stochastic optimization more than other activation functions do.

– Step 2: Calculation of the Loss Function. After the forward pass, the model produces a predicted output. Using the predicted output and the real output, we calculate the loss that is propagated by the back-propagation algorithm. We use cross-entropy as the loss function; it is computed by calculating the loss for each label separately and then summing the results (Eq. 5):

loss = − Σ_{c=1}^{M} y_{o,c} × log(p_{o,c})    (5)

where M is the number of classes, y_{o,c} is the binary indicator (0 or 1) of whether observation o belongs to class c, and p_{o,c} is the predicted probability that observation o belongs to class c.

– Step 3: Backward Pass. After calculating the loss, we back-propagate it and update the model using the gradients. In this step, the weights are adjusted according to the gradient flow. The process is repeated until the final error is minimal.

4 Experimental Results

We used a corpus of 12,142 literary passages written in the Bengali language, covering 8 eminent Bengali writers and 15 famous bloggers. For collecting the data, we scraped websites and blog sites using a custom web scraper and saved the texts in doc files. The proposed neural network model is evaluated on two types of datasets: with PoS features and without PoS features. As the writings of literary authors are not available in a proper format, we had to collect them from books and online portals [3, 1]; some texts were also collected manually. The data was later converted to .txt files and stored in a folder for the respective author. To collect data from bloggers, we scraped the writings of numerous bloggers from [2, 4]; some texts were left out due to lack of information and volume. We also collected data from [5], which has a good collection of writings from various bloggers. Table 1 summarizes the dataset.

Table 1: Data summary
Number of documents:            12,142
Number of sentences (approx.):  607,050
Number of words (approx.):      1,214,100
Total unique words (approx.):   29,000

In order to classify the texts, we feed the collected documents to our classifier model. Table 2 summarizes the dataset used for the classification process.

Table 2: Data summary for the train and test phases
                             Training   Testing
Number of classes            23         23
Number of documents          9,099      3,043
Average words per document   50         52
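As a toy numeric sketch of the per-neuron computation (Eq. 3), the ReLU activation (Eq. 4) and the cross-entropy loss (Eq. 5), assuming illustrative weights and probabilities rather than values from the trained model:

```python
# A toy numeric sketch of the per-neuron computation (Eq. 3), the ReLU
# activation (Eq. 4) and the cross-entropy loss (Eq. 5); the weights and
# probabilities below are illustrative values, not from the trained model.
import math

def relu(x):
    # Eq. 4: phi(x) = max(0, x)
    return max(0.0, x)

def neuron_output(w, x, b):
    # Eq. 3: y = phi(sum_i w_i * x_i + b)
    return relu(sum(wi * xi for wi, xi in zip(w, x)) + b)

def cross_entropy(y_true, p_pred):
    # Eq. 5: loss = -sum_c y_{o,c} * log(p_{o,c})
    return -sum(y * math.log(p) for y, p in zip(y_true, p_pred) if y)

y = neuron_output([0.5, -0.2], [1.0, 2.0], 0.1)   # 0.5*1.0 - 0.2*2.0 + 0.1 = 0.2
loss = cross_entropy([0, 1, 0], [0.2, 0.7, 0.1])  # -log(0.7), about 0.357
```

A full training step would apply this per-neuron computation layer by layer and then back-propagate the loss gradient, as described in Steps 1 to 3.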
4.1 Evaluation Measures

A confusion matrix is used to evaluate our model on the test data. The confusion matrix of the proposed approach with parts-of-speech features is shown in Fig. 6; it shows that 250 texts of Rabindranath Tagore, 250 texts of Sarat Chandra and 250 texts of Bankim Chandra are detected correctly.

Fig. 6: Confusion matrix of the proposed approach with PoS features

Precision, recall, F1 score and accuracy are computed as per Eq. 6 to Eq. 9, respectively:

Precision = TP / (TP + FP)    (6)
Recall = TP / (TP + FN)    (7)
F1 = (2 × Precision × Recall) / (Precision + Recall)    (8)
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (9)

Here TP, TN, FP and FN stand for true positives, true negatives, false positives and false negatives, respectively. Table 3 shows the precision, recall, F1 and accuracy of the classifier with and without PoS features on our dataset.

Table 3: Comparison of results
                                       Precision   Recall   F1     Accuracy (%)
Neural Network (without PoS features)  0.92        0.90     0.91   91.62
Neural Network (with PoS features)     0.94        0.94     0.94   94.25

4.2 Sample Input and Output

Fig. 7 shows a sample input and the corresponding output. The first example is an incorrect prediction of the author and the second a correct prediction. The reason behind the incorrect prediction is that certain texts of the author Sri Sri Ramakrishna and the author Jibanananda are very similar, and the frequencies of their PoS features are also similar.

Fig. 7: Sample input-output

4.3 Comparison with Existing Techniques

In order to measure its effectiveness, we compare the proposed method with existing techniques; Table 4 summarizes the comparison. It reveals that the proposed system performs very well compared to the other systems. The previous approaches used their own datasets. A recent method proposed by Khatun et al. achieved a higher accuracy (96%) than the others [15]. However, they used only 6,600 text documents written by 6 authors. Another method [13] also reported 96% accuracy for 10 authors, but on a very small collection of text documents (only 3,125). Accuracy may vary due to writing styles; it naturally tends to be higher for a small dataset with a limited number of authors because of the lower variation in writing styles.

Table 4: Comparison with previous approaches
                      Total authors   No. of documents   Accuracy (%)
Khatun et al. [15]    6               6,600              96
Chowdhury et al. [7]  6               2,400              92.9
Islam et al. [12]     5               1,973              85
Islam et al. [13]     10              3,125              96
Hossain et al. [10]   6               2,764              90.5
Proposed system       23              12,142             94.12

The proposed system considered a larger number of text documents (12,142) and authors (23) than the existing approaches and still achieved a reasonably good accuracy, amounting to about 94% given the number of documents and authors.

5 Conclusion

This paper introduced a neural-network-based approach for identifying authorship of Bengali literary and blog texts. The proposed system can identify the authorship of 23 authors in Bengali literature. To build the framework, a self-developed dataset of 12,142 text documents is used for training and testing. The neural network approach with n-gram and parts-of-speech features provided better accuracy than the existing techniques. The proposed system is not tested on a standard dataset and not validated against a standard technique, which are the main limitations of the
system. The accuracy may be improved with more labeled data, and k-fold cross-validation may be used in the training phase for better training accuracy. These are left as future work.

References

1. Ebanglalibrary, https://www.ebanglalibrary.com
2. Sachalayatan, https://en.sachalayatan.com
3. Society for Natural Language Technology Research, https://nltr.org/index.php
4. Somewhere in Blog, https://www.somewhereinblog.net
5. Stylogenetics, https://github.com/olee12/Stylogenetics
6. Chakraborty, T.: Authorship identification in Bengali literature: a comparative analysis. CoRR abs/1208.6268 (2012), http://arxiv.org/abs/1208.6268
7. Chowdhury, H.A., Imon, M.A.H., Islam, M.S.: Authorship attribution in Bengali literature using fastText's hierarchical classifier. In: 2018 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT). pp. 102–106. IEEE (2018)
8. Das, P., Tasmim, R., Ismail, S.: An experimental study of stylometry in Bangla literature. In: 2015 2nd International Conference on Electrical Information and Communication Technologies (EICT). pp. 575–580. IEEE (2015)
9. Holmes, D.I.: The evolution of stylometry in humanities scholarship. Literary and Linguistic Computing 13(3), 111–117 (1998)
10. Hossain, M.T., Rahman, M.M., Ismail, S., Islam, M.S.: A stylometric analysis on Bengali literature for authorship attribution. In: 2017 20th International Conference of Computer and Information Technology (ICCIT). pp. 1–5. IEEE (2017)
11. Hossain, R., Al Marouf, A.: BanglaMusicStylo: A stylometric dataset of Bangla music lyrics. In: 2018 International Conference on Bangla Speech and Language Processing (ICBSLP). pp. 1–5 (2018)
12. Islam, M.A., Kabir, M.M., Islam, M.S., Tasnim, A.: Authorship attribution on Bengali literature using stylometric features and neural network. In: 2018 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT). pp. 360–363. IEEE (2018)
13. Islam, N., Hoque, M.M., Hossain, M.R.: Automatic authorship detection from Bengali text using stylometric approach. In: 2017 20th International Conference of Computer and Information Technology (ICCIT). pp. 1–6. IEEE (2017)
14. Juola, P.: Rowling and Galbraith: an authorial analysis. Language Blog (2013)
15. Khatun, A., Rahman, A., Islam, M.S., Marium-E-Jannat: Authorship attribution in Bangla literature using character-level CNN. arXiv preprint arXiv:2001.05316 (2020)
16. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
17. Marouf, A., Hossain, R.: Lyricist identification using stylometric features utilizing BanglaMusicStylo dataset. In: 2nd International Conference on Bangla Speech and Language Processing (ICBSLP 2019) (2019)
18. Phani, S., Lahiri, S., Biswas, A.: A supervised learning approach for authorship attribution of Bengali literary texts. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP) 16(4), 28 (2017)
19. Rosenblatt, F.: The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65(6), 386 (1958)
20. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning internal representations by error propagation. Tech. rep., California Univ San Diego La Jolla Inst for Cognitive Science (1985)
21. Saha, N., Das, P., Saha, H.N.: Authorship attribution of short texts using multilayer perceptron. International Journal of Applied Pattern Recognition 5(3), 251–259 (2018)
22. Wilson, E., Tufts, D.W.: Multilayer perceptron design algorithm. In: Proceedings of IEEE Workshop on Neural Networks for Signal Processing. pp. 61–68. IEEE (1994)
Readability Classification of Bangla Texts

Zahurul Islam, Md. Rashedur Rahman, and Alexander Mehler
WG Text-Technology, Computer Science, Goethe-University Frankfurt
{zahurul,mehler}@em.uni-frankfurt.de, kamol.sustcse@gmail.com

Abstract. Readability classification is an important application of Natural Language Processing. It aims at judging the quality of documents and at assisting writers to identify possible problems. This paper presents a readability classifier for Bangla textbooks using information-theoretic and lexical features. Altogether 18 features are explored to achieve an F-score of 86.46%. The paper is an extension of our previous work [1].

Keywords: Bangla, text readability, information-theoretic features.

1 Introduction

Readability classification aims at measuring how well and how easily a text can be read and understood [2]. It deals with mapping texts onto degrees of readability. Thus, readability classification can be reconstructed as a sort of automatic text categorization [3]. Various factors influence the readability of a text, including simple features such as typeface, font size and text vocabulary, as well as more complex features relating to the syntax, semantics, or rhetorical structure of a text [1].

Professionals, such as teachers, journalists, or editors, produce texts for specific audiences. They need to check the readability of their output. Readability classifiers are also used as a means of pre-processing in the framework of natural language processing (NLP) [1].

A lot of research on readability classification exists for English [4–9], German [10], French [11], Japanese [12] and Chinese [13]. All these languages are considered high-resourced languages. They are contrasted with low-resourced languages, which are spoken by members of a small community or for which only few resources (corpora, tools etc.) exist [14]. Bangla is a low-resourced language in the latter sense. As an Indo-Aryan language it is spoken in South Asia, specifically in present-day Bangladesh and the Indian states of West Bengal, Assam and Tripura, and on the Andaman and Nicobar Islands. With nearly 250 million speakers [15], Bangla is spoken by a large speech community. Nevertheless, it is low-resourced because of the lack of appropriate corpora and tools. Thus, though many texts are produced in Bangla every day, authors can hardly measure their readability due to the lack of appropriate readability classifiers.

Recently, some approaches have addressed the readability of Bangla text. Das and Roychudhury [16, 17] experimented with two classical readability measures for English and applied them to Bangla texts. Sinha et al. [18] proposed two alternative readability measures for Bangla. Islam et al. [1] built a readability classifier using a corpus of Bangla textbooks.

A. Gelbukh (Ed.): CICLing 2014, Part II, LNCS 8404, pp. 507–518, 2014. © Springer-Verlag Berlin Heidelberg 2014

Although that classifier achieves an F-score of 72.10%, classifiers that produce better F-scores are still required. In this paper, we provide such a better-performing readability classifier for Bangla, by example of an extended version of the corpus used in [1]. The corpus is extracted from textbooks used in consecutive grades of the school system of Bangladesh.

Syntactic, semantic and discourse-related features are now broadly explored for building readability classifiers for high-resourced languages. Obviously, it is a challenge to do the same for low-resourced languages that lack preprocessing tools. Thus, in this paper, we explore lexical and information-theoretic features which do not require (much) linguistic preprocessing. The
paper is organized as follows: Section 2 discusses related work, followed by a description of the underlying corpus (Section 3). The operative readability features are described in Section 4. An experiment based on these features is the topic of Section 5. Its results are discussed in Section 6. Finally, a conclusion is given in Section 7.

2 Related Work

Since the early twentieth century, researchers have proposed different readability measures for English [4–9]. All of them explore simple surface-structural features such as average sentence length (ASL), average word length (AWL) and the average number of syllables in a word. Many commercial readability tools use these classical measures. Fitzsimmons et al. [19] stated that the SMOG [9] readability measure should be preferred for assessing the readability of texts on health care.

Petersen & Ostendorf [20] and Feng et al. [21] show that the classical models have significant drawbacks. Due to recent achievements in linguistic data processing, models of linguistic features are now in the focus of readability studies. [1] summarizes related work regarding language-model-based features [22–26], PoS-related features [21, 24, 27, 28], syntactic features [27, 29–32], and semantic features [21, 32].

Recently, Hancke et al. [10] measured the readability of German texts using lexical and syntactic features in conjunction with language models. According to their findings, morphological features influence the readability of German texts. Vajjala and Meurers [33] used lexical features from the field of Second Language Acquisition (SLA). In our study, we use type-token ratio (TTR) related readability measures as studied by Vajjala and Meurers [33].

Only few approaches consider the readability of Bangla texts. Das and Roychudhury [16, 17] show that readability measures proposed by Kincaid et al. [7] and Gunning [6] work well for Bangla. However, the measures were tested on only seven documents, mostly novels.

In our previous study [1], we proposed a readability classifier for Bangla using entropy- and relative-entropy-based features. We achieved an F-score of 72.10% by combining these features with lexical ones. Recently, Sinha et al. [18] proposed two readability measures that are similar to classical readability measures for English. They conducted a user experiment to identify important structural parameters of Bangla texts.
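The TTR-related lexical features mentioned above can be illustrated with a plain type-token ratio; the whitespace tokenization and the function name below are our own simplifications, not the implementation used in the studies cited:

```python
# A plain type-token ratio (TTR), illustrating the TTR-related lexical
# features mentioned above; whitespace tokenization and the function name
# are our own simplifications, not taken from the cited studies.

def type_token_ratio(text):
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

print(type_token_ratio("the cat saw the dog"))  # 4 unique types / 5 tokens = 0.8
```

Higher TTR values indicate a more varied vocabulary, which is why TTR variants serve as lexical complexity features in readability classification.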
A Supervised Framework for Classifying Dependency Relations from Bengali Shallow Parsed Sentences

Anupam Mondal and Dipankar Das
Computer Science and Engineering, Jadavpur University, Kolkata, India
link.anupam@gmail.com, ddas@cse.jdvu.ac.in

Abstract. Natural Language Processing, one of the contemporary research areas, has adopted parsing technologies for various languages across the world for different objectives. In the present task, a new approach has been introduced for classifying the dependency parsed relations for a morphologically rich and free-phrase-ordered Indian language like Bengali. The pairs of dependency parsed relations (also referred to as kaarakas, 'cases') are classified based on different features like vibhaktis (inflections), Part-of-Speech (POS), punctuation, gender, number and post-position. It is observed that the consecutive and non-consecutive occurrences of such relations play a vital role in the classification. We employed three different machine-learning classifiers, namely NaiveBayes, Sequential Minimal Optimization (SMO) and Conditional Random Field (CRF), which obtained average F-scores of 0.895, 0.869 and 0.697, respectively, for classifying relation pairs of three primary kaarakas and one primary vibhakti relation. We have also conducted an error analysis for these primary relations using confusion matrices.

Keywords: Dependency relations · Kaaraka · Vibhakti · Machine-learning classifiers

1 Introduction

Dependency parsing, a challenging task for processing any natural language, seems an obvious milestone while dealing with morphologically rich and free-phrase-ordered languages, especially Indian languages. Bengali, the seventh most spoken language in the world, the second in India and the national language of Bangladesh, is morphologically rich and resource constrained. Thus, to the best of our knowledge, at present there is no full-fledged parser available for Indian languages, and especially for Bengali. Bengali is one of the important Indo-Iranian languages, spoken by a population that now exceeds 211 million, or 3.11% of the world population. Geographically, the Bengali-speaking population percentages are as follows: Bangladesh (over 95%), the Indian states of Andaman and Nicobar Islands (26%), Assam (28%), Tripura (67%), and West Bengal (85%).

1 http://listverse.com/2008/06/26/top-10-most-spoken-languages-in-the-world/
2 https://en.wikipedia.org/wiki/List_of_languages_by_number_of_native_speakers

© Springer International Publishing Switzerland 2015. R. Prasath et al. (Eds.): MIKE 2015, LNAI 9468, pp. 597–606, 2015. DOI: 10.1007/978-3-319-26832-3_56

The development of parsers for Indian languages in general and Bengali in particular is difficult and challenging, as the language (1) is an inflectional language providing the richest and most challenging sets of linguistic and statistical features, resulting in long and complex word forms, and (2) has a relatively free phrase order and is less computerized compared to English [2].

To date, due to the scarcity of reliable annotated data, it is observed that several attempts to develop parsers for Indian languages mainly depend on linguistic rules [2, 13, 14, 17, 18]. A hybrid dependency parser that proposed a two-stage parsing system [1] and a data-driven parser that identifies the dependency relations between chunks in a sentence using a Treebank are found in the literature. The respective researchers conducted their experiments to improve the mistakes of the data-driven parser based on the effects of case frames. However, none of the approaches has considered the classification of the relation pairs using machine learning approaches. Therefore, the present task aims to identify chunks and phrases and their intra-relationships from sentences using a data-driven approach. In addition, we have also classified the dependency relations, which is considered a prerequisite towards developing a full-fledged parser. It is observed that different
relations like kaaraka and vibhakti play an important role in constructing sentences. In Bengali grammar, a kaaraka is the relationship between a verb and a noun, or a verb and a pronoun, in a sentence. There are seven different kaaraka relations, namely kartaa, karma, karana, sampradana, apadana, nimito and adhikarana, represented in this paper as K1, K2, K3, K4, K5, K6 and K7, respectively. Here, we have dealt with only three kaaraka relations, kartaa (K1), karma (K2) and adhikarana (K7), as per their frequency. Examples of the kaarakas are illustrated in Fig. 1.

Fig. 1. Kaaraka examples with illustration

Vibhakti also plays an important role in forming Bengali sentences. There are presently ten symbols that indicate the vibhaktis in Bengali, viz. ay, ke, re, te, sunyo etc., represented as R1, R2, R3, …, R10, respectively, in the annotated corpus [12]. The corpus also provides the information related to the respective kaarakas, as shown in Fig. 2.

Fig. 2. Vibhakti examples with illustration

Therefore, in order to deal with such relations in a machine learning framework, we always need to extract linguistic features at different levels of granularity (word, chunk and/or sentence).

The first obvious question was how to select the important kaarakas in order to identify the dependency parsed relations. Starting with the top four frequent dependency relations (K1, K2, K7 and R6), we included associated features, viz. Part-of-Speech (POS), punctuation, number, gender and post-position, for implementing the machine learning framework. An exhaustive error analysis with respect to different classifiers and modes of operation was performed, achieving maximum F-scores of 52%, 42%, 45% and 69% for the K1, K2, K7 and R6 dependency relations, respectively.

The rest of the sections are as follows. In Sect. 2, we discuss the related attempts made in developing parsing technologies for Bengali and other Indian languages. Pre-processing of the corpus and selection of the top-frequent relations are discussed in Sect. 3. In the next section, we describe the methodologies to extract the related features that were supplied with the relations. The system framework for classifying the dependency relations is discussed in Sect. 5, while in Sect. 6 we analyze the errors in terms of confusion matrices. Finally, Sect. 7 concludes the task and mentions possibilities for future work.

2 Related Work

In the literature survey, we found the development of a predictive parser in an efficient way for morphologically rich and free-word-order languages viz. Bengali [3, 15]. The identification of structured Bengali sentences uses symbols (constituents) based on Context Free Grammar (CFG) rules. The recognition of Bengali grammar from the sentences was a contributory attempt of this task owing to the availability of different grammars. In contrast, a grammar-driven parser was developed for the Bengali language which achieved a score near 90% [4] in a shared task (http://ltrc.iiit.ac.in/mtpil2012/). A group of researchers tried to generate a new dataset for reducing the gap between structured and unstructured forms of data with the help of Treebanks containing approximately 1,500 sentences. They
had not used the developed dataset for extracting linguistic rules for the task. A comparative analysis was done between grammar-driven and data-driven approaches for developing a dependency parser for the Bengali language [10, 11].

Lexical Functional Grammar (LFG) [6] based linguistic phenomena have been applied in a wide range for the Bengali language. The Constituent phrase structure (C-structure) and the Functional structure (F-structure) are considered primary features of the LFG technique.

In this concern, the Paninian model and a dependency-based framework were introduced as effective techniques for parsing Bengali sentences [5]. The researchers took help of the demand-source concept under Paninian grammar with six different types of kaaraka and verb. The kaaraka and verb groups of words are treated as source and demand groups, respectively, where the root of the dependency tree indicates the verb along with appropriate kaaraka labels. Several researchers have analyzed dependency parsers [7, 18] for Indian languages and remarked that the development of dependency parsers can be carried out either using grammar-driven approaches or data-driven approaches [16]. In the case of morphologically rich and free-word-order languages, the grammar-driven approach is more difficult than the data-driven approach. In several cases, the Malt parser (www.maltparser.org) has been used as a transition-based approach for dependency parsing; it mainly consists of transition- and classifier-based prediction approaches.

In this report, we introduce dependency relation (kaaraka, vibhakti) based classification approaches with several features, viz. POS, punctuation, number etc., for developing a Bengali dependency parser.

3 Resource Preparation

3.1 Corpus

In order to develop a dependency parser for any language, we need to identify the linguistic rules that guide us in how to relate different chunks of a sentence using the grammar of that language. The effect of morphological richness and a free-phrase-ordered structure makes this rule identification difficult. Therefore, in the present task, we have mainly tried to design the language-dependent rules without considering the structure of the sentences in the input corpus. We have observed that in the Bengali language, the words of a sentence appear in the form of any of the seven kaarakas and ten vibhaktis as per the guidelines [8]. The dependency relation (drel) tags of a sentence are shown in Fig. 3. We have adopted two different techniques based on consecutive and non-consecutive occurrences of the primary dependency relations, as described in the next section.

3.2 Selection of Consecutive and Non-consecutive Occurrences

In order to derive the classification features, we evaluated the occurrence probabilities of the dependency relations in the corpus in terms of consecutive and non-consecutive appearances. In the case of a consecutive dependency relation, we considered the dependency relations of neighboring words, whereas a gap of two or three words was allowed for identifying dependency relations in the non-consecutive case. We observed that, in a morphologically rich and free-phrase-ordered language, the occurrence probability of a dependency relation (kaaraka) is high for consecutive words. Similarly, the non-consecutive presence of the dependency relations also plays a crucial role: the non-consecutive appearances help to identify the implicit co-reference that exists among long-distance words, which is needed to develop a full-fledged dependency parser. Table 1 illustrates the dependency relations
of consecutive and non-consecutive appearances of the words, whereas Fig. 4 shows the steps to identify consecutive dependency relations; similar steps were used for identifying the non-consecutive relations.

In order to implement any data-driven model, we need to analyze the data with different statistics before applying the supervised algorithms. In the present report, the whole corpus was collected from articles published in newspapers and textbooks by a group of members of IIIT-H and annotated with different relations based on kaarakas (e.g. K1, K2, K7) and vibhaktis (e.g. R1, R2, R6) [9]. The corpus was provided by IIIT-H in a shared task challenge (http://shiva.iiit.ac.in/SPSAL2007/) in order to build a shallow parser for Bengali. We split the corpus randomly into three sets, namely training, development and test, with a distribution of 50%, 20% and 30%, respectively. The important distributions of the sentential relations, POS tags and their combinations in these three sets are given in Table 2. We then attempted to identify other features that are available from the annotated corpus.

Fig. 3. A sample of dependency relations of words based on Shakti Standard Format (SSF):
<af=e, drel=nmod: NP2/name=NP>
<af=keu, drel=k1: VGF/name=NP2>
<af=biRayZa, drel=k7: VGF/name=NP3>
<af=AgrahI, drel=k1s: VGF/name=JJP>

4 Feature Extraction

While analyzing the training data of 700 sentences, we found that a total of 2,329 instances are present in the top-4 relations (K1, K2, K7 and R6). These relations appear with an average of 3.3 relational instances per sentence and are therefore considered our key instances. The distributions of the top-4 relations are given in Table 1.

Table 1. Important dependency relation combinations for consecutive (non-consecutive) words
Relation combination   Training    Development   Test
K1-K2                  139 (376)   20 (53)       44 (106)
K2-K2                  126 (274)   7 (22)        83 (165)
R6-K1                  121 (234)   11 (14)       77 (182)

Fig. 4. Steps for identifying the consecutive dependency relations:
Step 1: Define the set of primary dependency relations DRL = {K1, K2, K7, R6 and ccof}.
Step 2: Extract the dependency relation from each word of a sentence and store them in a list L.
Step 3: Pick two consecutive relations, Ri and Ri+1, from the list L.
Step 3.1: If both Ri and Ri+1 belong to DRL, take the relation pair Ri and Ri+1 as a candidate pair.
Step 3.2: Else move to the next relation pair, Ri+1 and Ri+2.
Step 4: Repeat until the list L is exhausted.

Table 2. Important primary relations, their POS tags and combinations in the training, development and test sets
                      Training   Development   Test
Words                 2329       660           1854
Sentences             700        150           280
Top-4 relations
  K1                  734        174           252
  K2                  756        101           386
  K7                  395        91            273
  R6                  446        58            297
POS tags
  Noun                795        165           569
  Pronoun             384        61            77
  Unk                 709        76            332
POS tag combinations
  Adverb-Noun         869        175           623
  Noun-Pronoun        1179       226           673

After an initial investigation on the training, test and development data sets, we extracted five features (POS, punctuation, gender, number and post-position) that play important roles in distinguishably identifying the top-4 primary relations (Table 3). The POS tag feature produces remarkable output for identifying the K1, K2 and K7 relations. Mainly, the adjective, adverb, noun, verb and WQ tags are notable for identifying the K1-K2 and K2-K2 relation pairs. In the case of the gender feature, K2 mainly appears as singular whereas K1 represents the plural. The
|
<s>above derived observations played vital roles in designing the dependency parser for the Bengali language.

5 System Framework

We have used the Weka tool and employed two different classifiers, viz. NaiveBayes and SMO, for classifying the relations. Along with the extracted features described in the previous section, we also included the consecutive and non-consecutive occurrences of the relation pairs and their POS tag combinations as features for developing the classification framework. It is observed that the inclusion of the features related to consecutiveness improves the accuracy of the system, as illustrated in Table 4. We have adopted four different modes of operation, namely Use training set, Supplied test set, Cross-validation (Folds-10) and Percentage split (66%), on each of the classifiers in the Weka toolkit. The NaiveBayes classifier produced remarkable accuracy (70%) and average precision, recall and F-measure with the top-6 features and with all features, for all modes of operation. Similarly, in case of the SMO classifier, the accuracy (75%) and average precision, recall and F-measure are notable with the top-5, top-6 and all-features sets for all modes of operation.

In addition to the classifiers in Weka, we also used a Conditional Random Field (CRF) for classifying the primary dependency relations. The precision and F-measure with respect to the top-4 features are high for K1, whereas recall is low with the top-6 features for the relation K2. In case of identifying the R6 relation using CRF, the precision, recall and F-score are notable with the top-5, top-6 and all-features sets. The detailed observations of precision, recall and F-score for all primary relations (K1, K2, K7 and R6) along with secondary relations (SR) are mentioned in Table 5.

Table 3.
Important Feature analysis for (Training/Development/Test) DatasetsRelations on resource($T/$D/$Te)POS (f1) Punc (f2) Gender (f3) Number (f4) Post position (f5)Adj Adv Noun unk Sg Pl 4 5 a D OK1 (734/174/252) 47/44/7 22/1/5 211/61/128 265/34/79 323/77/140 37/10/15 6/0/3 0/0/0 0/0/0 359/87/154 4/0/1K2 (756/101/386) 80/5/93 15/4/13 295/41/179 206/18/73 346/44/177 14/4/10 7/1/3 21/6/6 3/2/0 348/46/183 12/2/4K7 (395/91/273) 36/6/20 25/5/3 157/39/153 81/8/58 148/39/152 0/0/3 0/1/0 1/2/1 4/0/3 195/50/164 0/1/2R6 (443/58/297) 9/2/2 12/0/6 141/24/136 157/16/121 221/31/141 32/6/14 0/0/0 0/0/0 6/0/11 3/0/1 250/37/153Total (2329/660/1854) 172/57/122 74/10/27 795/165/596 709/76/331 1038/191/610 83/20/42 13/2/6 22/8/7 13/4/14 905/183/502 266/40/160$T → Training $D → Development $Te → Test Punc (f2) → PunctuationAdj → Adjective Sg → Singular Adv → Adverb Pl → Plural a → any number6 www.cs.waikato.ac.nz/ml/weka.7 nlp.stanford.edu/software/CRF-NER.shtml.A Supervised Framework for Classifying Dependency Relations 603http://www.cs.waikato.ac.nz/ml/wekahttp://nlp.stanford.edu/software/CRF-NER.shtmlTable 4. System generated results with important mode of operation for different classifiersA B C D E FNaiveBayes classifierCross-validation Folds-10 [661] 9# 498 75.34 0.80 0.7538$ 496 75.04 0.79 0.7507@ 439 66.41 0.64 0.6646** 412 62.33 0.60 0.6235*** 399 60.36 0.57 0.604SMO classifierUse training set [661] 9# 577 87.29 0.89 0.8738$ 575 86.99 0.88 0.8707@ 558 84.42 0.86 0.8446** 524 79.27 0.80 0.7935*** 524 79.27 0.80 0.793# all features $ top 6-features @ top 5-features** top 4-features *** top 3-featuresA → Important Mode of Operation [No. of Instances]B → No. of Attributes (No. of features)C → No. of Correctly Classified InstancesD → Avg. Precision E → Avg. Recall F → Avg. F-MeasureTable 5. System generated</s>
|
<s>important results based on the CRF tool

Dependency relation (no. of occurrences) | Features | Precision | Recall | F-Score
K1 (734) | All 7 | 0.431 | 0.6 | 0.5
K1 | Top 5 | 0.434 | 0.7 | 0.5
K1 | Top 6 | 0.478 | 0.7 | 0.6
K2 (756) | All 7 | 0.518 | 0.3 | 0.4
K2 | Top 5 | 0.522 | 0.3 | 0.4
K2 | Top 6 | 0.444 | 0.4 | 0.4
K7 (396) | All 7 | 0.570 | 0.3 | 0.4
K7 | Top 5 | 0.588 | 0.4 | 0.4
K7 | Top 6 | 0.696 | 0.3 | 0.4
R6 (443) | All 7 | 0.713 | 0.7 | 0.7
R6 | Top 5 | 0.714 | 0.7 | 0.7
R6 | Top 6 | 0.673 | 0.6 | 0.6
SR (1986) | All 7 | 0.976 | 1 | 0.9
SR | Top 5 | 0.978 | 1 | 0.9
SR | Top 6 | 0.975 | 1 | 0.9

6 Error Analysis

We have also conducted an error analysis based on the confusion matrices for the classified dependency relations, in the form of a graphical representation. Figure 5 shows the occurrences of the relations (K1, K2, K7, R6 and SR) in the confusion matrices for the different classifiers, NaiveBayes, SMO and CRF, with respect to all features. We have observed that the occurrences are high when the K1 relation appears as K2, K2 appears as K1 or K7, K7 appears as K1, and R6 appears as SR (secondary relation), for their important modes of operation across all classifiers.

7 Conclusion and Future Work

In this paper, we have introduced approaches for classifying the dependency-parsed relations based on kaarakas and vibhaktis for the morphologically rich and free-word-order language Bengali. The consecutive and non-consecutive techniques have been used for identifying the important dependency relations from the sentences. The dependency relations based on chunks or phrases also give satisfactory output. Finally, we prepared a machine-learning framework for classifying the dependency relations, followed by an exhaustive error analysis that shows crucial insights towards developing a full-fledged parser.

In future, we will include the semantic relationships for extracting suitable chunks from the sentences, which can guide the development of a full-fledged dependency parser in an efficient manner.

References

1. Dhar, A., Chatterji, S., Sarkar, S., Basu, S.: A hybrid dependency parser for Bangla.
In:Proceedings of the 10th Workshop on Asian Language Resources, COLING Mumbai,pp. 55–64, India (2012)Fig. 5. Confusion matrix for different classifiers w.r.t all features of important modesA Supervised Framework for Classifying Dependency Relations 6052. Ghosh, A., Bhaskar, P., Das, A., Bandyopadhyay, S.: Dependency parser for Bengali. In: JUSystem at ICON (2009)3. Chatterji, S., Sonare, P., Sarkar, S., Roy, D.: Grammar driven rules for hybrid Bengalidependency parsing. In: Proceedings of ICON 2009 NLP Tools Contest: Indian LanguageDependency Parsing, Hyderabad, India (2009)4. Das, A., Shee, A., Garain, U.: Evaluation of two Bengali dependency parsers. In:Proceedings of the Workshop on Machine Translation and Parsing in Indian Languages(MTPIL), COLING, pp. 133–142 (2012)5. Garain, U., De. S.: Dependency Parsing in Bangla. IGI Global (2013)6. Haque, M.N., Khan, M.: Parsing Bangla using LFG. In: Proceedings of Association forComputational Linguistic (1997)7. Kosaraju, P., Kesidi, S.R., Ainavolu, V.B.R., Kukkadapu, P.: Experiments on Indianlanguage dependency parsing. In: Proceedings of ICON (2010)8. Bharati, A., Sangal, R., Sharma, D.M.: SSF: Shakti Standard Format Guide (2007)9. Das, D., Choudhury, M.: Chunker and shallow parser for free word order languages: anapproach based on valency theory and feature structures. In: Proceedings of ICON (2004)10. Begum, R., Husain, S., Sharma, D.M., Bai, L.: Developing verb frames in Hindi. In:Proceedings of the</s>
|
<s>Sixth International Conference on Language Resources and Evaluation(LREC), Marrakech, Morocco (2008)11. Chatterji, S., Sarkar, T.M., Sarkar, S., Chakrabory, J.: Kaaraka relations in Bengali. In:Proceedings of 31st All-India Conference of Linguists (AICL), Hyderabad, pp. 33–36, India(2009)12. Bharati, R., Sangal, D.M., Bai, L.: AnnCorra: annotating corpora guidelines for POS andchunk annotation for Indian languages. Technical report (TR-LTRC-31), LTRC, IIITHyderabad, India (2006)13. Ghosh, A., Das, A., Bhaskar, P., Bandyopadhyay, S.: Bengali parsing system. In:ICON NLP Tool Contest (2010)14. Rao, P.R.K., Vijay, S.R.R., Vijaykrishna, R., Sobha, L.: A text chunker and hybrid POStagger for Indian languages. In: Proceedings of IJCAI Workshop on Shallow Parsing forSouth Asian Languages (2007)15. De, S., Dhar, A., Garain, U.: Structure simplification and demand satisfaction approach todependency parsing in Bangla. In: Proceedings of ICON 2009 NLP Tools Contest: IndianLanguage Dependency Parsing, Hyderabad, India (2009)16. Bandyopadhyay, S., Ekbal, A., Halder, D.: HMM based POS tagger and rule-based chunkerfor Bengali. In: Proceedings of NLPAI Machine Learning Workshop on Part of Speech andChunking for Indian Languages (2006)17. Das, D., Ekbal, A., Bandyopadhyay, S.: Acquiring verb subcategorization frames in Bengalifrom corpora. In: Li, W., Mollá-Aliod, D. (eds.) ICCPOL 2009. LNCS, vol. 5459, pp. 386–393. Springer, Heidelberg (2009)18. Begum, R., Husain, S., Dhwaj, A., Sharma, D.M., Bai, L., Sangal, R.: Dependencyannotation scheme for Indian Languages. In: Proceedings of the Third International JointConference on Natural Language Processing (IJCNLP), Hyderabad, India (2008)606 A. Mondal and D. 
Das</s>
|
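<s>The consecutive relation-pair identification of Fig. 4 above reduces to a short scan over the per-sentence relation list. The following is a minimal sketch under stated assumptions: the exact membership of DRL (e.g. whether ccof is included) follows our reading of the paper, and all function names are ours.

```python
# Sketch of the Fig. 4 procedure for consecutive dependency relation pairs.
# DRL membership is an assumption based on the relations named in the paper.
DRL = {"k1", "k2", "k7", "r6", "ccof"}

def consecutive_pairs(relations):
    """Steps 3-4: scan the relation list L and keep pairs (Ri, Ri+1) with both in DRL."""
    pairs = []
    for i in range(len(relations) - 1):
        ri, rj = relations[i], relations[i + 1]
        if ri in DRL and rj in DRL:          # Step 3.1: candidate pair
            pairs.append((ri, rj))
        # Step 3.2: otherwise simply advance to the next pair
    return pairs

# Relations extracted from one parsed sentence (Step 2 output, illustrative)
print(consecutive_pairs(["k1", "k2", "nmod", "k7", "r6"]))
# → [('k1', 'k2'), ('k7', 'r6')]
```

The non-consecutive variant described in Sect. 3.2 would instead pair relations at a distance, but the candidate filter against DRL is the same.</s>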
<s>International Journal of Control Theory and Computer Modeling (IJCTCM) Vol.5, No.1, January 2015 DOI: 10.5121/ijctcm.2015.5101

AUTOMATIC CLASSIFICATION OF BENGALI SENTENCES BASED ON SENSE DEFINITIONS PRESENT IN BENGALI WORDNET

Alok Ranjan Pal, Diganta Saha and Niladri Sekhar Dash
Dept. of Computer Science and Eng., College of Engineering and Management, Kolaghat
Dept. of Computer Science and Eng., Jadavpur University, Kolkata
Linguistic Research Unit, Indian Statistical Institute, Kolkata

ABSTRACT

Based on the sense definitions of words available in the Bengali WordNet, an attempt is made to classify Bengali sentences automatically into different groups in accordance with their underlying senses. The input sentences are collected from 50 different categories of the Bengali text corpus developed in the TDIL project of the Govt. of India, while information about the different senses of a particular ambiguous lexical item is collected from the Bengali WordNet. On an experimental basis we have used the Naive Bayes probabilistic model as a classifier of sentences. We have applied the algorithm over 1747 sentences that contain a particular Bengali lexical item which, because of its ambiguous nature, is able to trigger different senses that render the sentences in different meanings. In our experiment we have achieved around 84% accuracy in sense classification over the total input sentences. We have analyzed those residual sentences that did not comply with our experiment and affected the results, and note that in many cases wrong syntactic structures and scant semantic information are the main hurdles in the semantic classification of sentences. The applicational relevance of this study is attested in automatic text classification, machine learning, information extraction, and word sense disambiguation.

KEYWORDS

Natural Language Processing, Bengali Word Sense Disambiguation, Bengali WordNet, Naïve Bayes Classification.

1. INTRODUCTION

In all natural languages, there are many words that denote different meanings based on the contexts of their use within texts. Since it is not easy to capture the actual intended meaning of a word in a piece of text, we need to apply the Word Sense Disambiguation (WSD) [1-6] technique for identifying the actual meaning of a word based on its distinct contextual environments. For example, in English the word 'goal' may denote several senses based on its use in different types of construction, such as He scored a goal, It was his goal in life, etc. Such words with multiple meanings are ambiguous in nature and they posit serious challenges in understanding a natural language text both by man and machine.

The act of identifying the most appropriate sense of an ambiguous word in a particular syntactic context is known as WSD. A normal human being, due to her innate linguistic competence, is able to capture the actual contextual sense of an ambiguous word within a specific syntactic frame with the knowledgebase triggered from various intra- and extra-linguistic environments. Since a machine does not possess such capacities and competence, it requires some predefined rules or statistical methods to do</s>
|
<s>this job successfully. Normally, two types of learning procedure are used for WSD. The first one is Supervised Learning, where a learning set is considered for the system to predict the actual meaning of an ambiguous word within a syntactic frame in which the specific meaning for that particular word is embedded. The system tries to capture contextual meaning of the ambiguous word based on that defined learning set. The other one is Unsupervised Learning where dictionary information (i.e., glosses) of the ambiguous word is used to do the same task. In most cases, since digital dictionaries with information of possible sense range of words are not available, the system depends on on-line dictionaries like WordNet [7-13] or SenseNet. Adopting the technique used in Unsupervised Learning, we have used the Naive Bayes [14] probabilistic measure to mark the sentence structure. Besides, we have a Bengali WordNet and a standard Bengali dictionary to capture the actual sense a word generates in a normal Bengali sentence. The organization of the paper is as follows: in Section 2, we present a short review of some earlier works; in Section 3, we refer to the key features of Bengali morphology with reference to English; in Section 4, we present an overview of English and Bengali WordNet; in Section 5, we refer to the Bengali corpus we have used for our study; in Section 6, we explain the approach we have adopt for our work, in Section 7, we present the results and corresponding explanations; in Section 8, we present some close observations on our study, and in Section 9, we infer conclusion and redirect attention towards future direction of this research. 2. REVIEW OF EARLIER WORKS WSD is perhaps one of the greatest open problems at lexical level of Natural Language Processing (Resnik and Yarowsky 1997). 
Several approaches have been established in different languages for assigning the correct sense to an ambiguous word in a particular context (Gaizauskas 1997, Ide & Véronis 1998, Cucerzan, Schafer & Yarowsky 2002). Along with English, work has been done in many other languages like Dutch, Italian, Spanish, French, German, Japanese, Chinese, etc. (Xiaojie & Matsumoto 2003, Cañas, Valerio, Lalinde-Pulido, Carvalho & Arguedas 2003, Seo, Chung, Rim, Myaeng & Kim 2004, Liu, Scheuermann, Li & Zhu 2007, Kolte & Bhirud 2008, Navigli 2009, Nameh, Fakhrahmad & Jahromi 2011), and in most cases a high level of accuracy has been achieved. For Indian languages like Hindi, Bengali, Marathi, Tamil, Telugu, Malayalam, etc., efforts to develop WSD systems have not been very successful, for several reasons. One of the reasons is the morphological complexity of the words of these languages; words are morphologically so complex that there is no benchmark work in these languages (especially in Bengali). Keeping this reality in mind, we have made an attempt to disambiguate word sense in Bengali. We believe this attempt will lead us to the destination through the tricky terrain of trial and error.</s>
|
<s>In essence, any WSD system typically involves two major tasks: (a) determining the different possible senses of an ambiguous word, and (b) assigning the word its most appropriate sense in the particular context where it is used. The first task needs a Machine Readable Dictionary (MRD) to determine the different possible senses of an ambiguous word. At this moment, the most important sense repository used by the NLP community is the WordNet, which is being developed for all major languages of the world for language-specific WSD tasks as well as for other linguistic work. The second task involves assigning each polysemic word its appropriate sense in a particular context. The WSD procedures so far used across languages may be classified into two broad types: (i) knowledge-based methods, and (ii) corpus-based methods. The knowledge-based methods obtain information from external knowledge sources, such as Machine Readable Dictionaries (MRDs) and lexico-semantic ontologies. On the contrary, corpus-based methods gather information from the contexts of previously annotated instances (examples) of words. These methods extract knowledge from the examples by applying statistical or machine learning algorithms. When the examples are previously hand-tagged, the methods are called supervised learning, and when the examples do not come with sense labels they are called unsupervised learning.

2.1 Knowledge-based Methods

These methods do not depend on the large amount of training material required by supervised methods. Knowledge-based methods can be classified further according to the type of resource they use: Machine-Readable Dictionaries (Lesk 1986); Thesauri (Yarowsky 1992); Computational Lexicons or Lexical Knowledgebases (Miller et al. 1990).

2.2 Corpus-based Methods

The corpus-based methods resolve the sense through a classification model built from example sentences. These methods involve two phases: learning and classification. The learning phase builds a sense classification model from the training examples, and the classification phase applies this model to new instances (examples) for finding the sense.

2.3 Methods Based on Probabilistic Models

In recent times, we have come across cases where various statistics-based probabilistic models are being used to carry out the same task. The statistical methods evaluate a set of probabilistic parameters that express the conditional probability of each lexical category given a particular context. These parameters are then combined in order to assign the set of categories that maximizes its probability on new examples. The Naive Bayes algorithm (Duda and Hart 1973) is the most used algorithm in this category; it uses the Bayes rule to find out the conditional probabilities of features in a given class. It has been used in many investigations of the WSD task (Gale et al. 1992; Leacock et al. 1993; Pedersen and Bruce 1997; Escudero et al. 2000; Yuret 2004). In addition to these, there are also some other methods that are used in different languages for the WSD task, such as methods based on the similarity of examples (Schutze 1992), the k-Nearest Neighbour algorithm (Ng and Lee 1996), methods based on</s>
|
<s>discursive properties (Gale et al. 1992; Yarowsky 1995), and methods based on discriminating rules (Rivest 1987), etc.

3. KEY FEATURES OF BENGALI MORPHOLOGY

In English, compared to Indic languages, most words have a limited number of morphologically derived variants. Due to this it is comparatively easier to work on WSD in English, as it does not pose serious problems in dealing with varied forms. For instance, the verb eat in English has only five conjugated (morphologically derived) forms, namely eat, eats, ate, eaten, and eating. On the other hand, most Indian languages (e.g., Hindi, Bengali, Odia, Konkani, Gujarati, Marathi, Punjabi, Tamil, Telugu, Kannada, Malayalam, etc.) are morphologically very rich, varied and productive. As a result, we can derive more than a hundred conjugated verb forms from a single verb root. For instance, the Bengali verb khāoyā "to eat" has more than 150 conjugated forms, including both calit (colloquial) and sādhu (chaste) forms, such as khāi, khās, khāo, khāy, khān, khācchi, khācchis, khāccha, khācchen, khācche, khāitechi, kheyechi, kheyecha, kheyechis, kheyeche, kheyechen, khelam, kheli, khele, khela, khelen, khāba, khābi, khābe, khāben, khācchilām, khācchile, khācchila, khācchilen, khācchili, etc. (to mention a few). While nominal and adjectival morphology in Bengali is light (in the sense that the number of forms derived from an adjective or a noun, although quite large, is not up to the range of forms derived from a verb), the verbs are highly inflected. In general, nouns are inflected according to seven grammatical cases (nominative, accusative, instrumental, ablative, genitive, locative, and vocative), two numbers (singular and plural), a few determiners like -ṭā, -ṭi, -khānā, -khāni, and a few emphatic markers like -i and -o. The adjectives, on the other hand, are normally inflected with some primary and secondary adjectival suffixes denoting degree, quality, quantity, and similar other attributes. As a result, building a complete and robust WSD system for all types of morphologically derived forms, tagged with lexical information and semantic relations, is a real challenge for a language like Bengali [15-25].

4. ENGLISH AND BENGALI WORDNET

The WordNet is a digital lexical resource which organizes lexical information in terms of word meanings. It is a system for bringing together different lexical and semantic relations between words. In a language, a word may appear in more than one grammatical category, and within that grammatical category it can have multiple senses. These categories and all the senses are captured in the WordNet. WordNet supports the major grammatical categories, namely Noun, Verb, Adjective, and Adverb. All words which express the same sense (same meaning) are grouped together to form a single entry in WordNet, called a Synset (set of synonyms). Synsets are the basic building blocks of WordNet. It represents just one lexical concept for each entry.</s>
|
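<s>The Synset organization just described — one headword, several (category, sense) records — can be modelled very directly. The following sketch is ours: the record fields mirror the kind of information a WordNet entry carries, but all data and names here are invented for illustration and are not drawn from the English or Bengali WordNet.

```python
# Toy model of WordNet-style storage: one list of sense records per headword.
# All entries below are illustrative placeholders, not real WordNet data.
lexicon = {
    "head": [
        {"pos": "noun", "synset": ["head", "caput"], "gloss": "upper part of the body"},
        {"pos": "noun", "synset": ["head", "chief"], "gloss": "a person who is in charge"},
        {"pos": "verb", "synset": ["head", "lead"],  "gloss": "to be in charge of"},
    ],
}

def senses(word, pos=None):
    """Return every sense of a word, optionally restricted to one grammatical category."""
    return [s for s in lexicon.get(word, []) if pos is None or s["pos"] == pos]

print(len(senses("head")))              # → 3
print(len(senses("head", pos="noun")))  # → 2
```

A WSD system consumes exactly this kind of lookup: it enumerates the candidate senses of an ambiguous word and then lets the disambiguation model choose among them.</s>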
<s>WordNet is developed to remove ambiguity in cases where a single word denotes more than one sense. International Journal of Control Theory and Computer Modeling (IJCTCM) Vol.5, No.1, January 2015 The English WordNet [26] available at present contains a large list of synsets in which there are 117097 unique nouns, 11,488 verbs, 22141 adjectives, and 4601 adverbs (Miller, Beckwith, Fellbaum, Gross, and Miller 1990, Miller 1993). The semantic relations for each grammatical category, as maintained in this WordNet, may be understood from the following diagrams (Fig. 1 and Fig. 2): Figure 1. Noun Relations in English WordNet Figure 2. Verb relations in English WordNet The Bengali WordNet [27] is also a similar type of digital lexical resource, which aims at providing mostly semantic information for general conceptualization, machine learning and knowledge representation in Bengali (Dash 2012). It provides information about Bengali words from different angles and also gives the relationship(s) existing between words. The Bengali WordNet is being developed using expansion approach with the help of tools provided by Indian Institute of Technology (IIT) Bombay. In this WordNet, a user can search for a Bengali word and get its meaning. In addition, it gives the grammatical category namely, noun, verb, adjective or adverb of the word being searched. It is noted that a word may appear in more than one grammatical category and a particular grammatical category can have multiple senses. The WordNet also provides information for these categories and all senses for the word being searched. 
Apart from the category for each sense, the following set of information for a Bengali word is presented in the WordNet: (a) meaning of the word, (b) example of use of the word, (c) synonyms (words with similar meanings), (d) part-of-speech, (e) ontology (hierarchical semantic representation), (f) semantic and lexical relations. At present the Bengali WordNet contains 36534 words covering all major lexical categories, namely noun, verb, adjective, and adverb.

5. THE BENGALI CORPUS

The Bengali corpus used in this work was developed under the TDIL (Technology Development for the Indian Languages) project, Govt. of India (Dash 2007). This corpus contains text samples from 85 text categories or subject domains like Physics, Chemistry, Mathematics, Agriculture, Botany, Child Literature, Mass Media, etc. (Table 1), covering 11,300 A4 pages, 271102 sentences and 3589220 non-tokenized words in their inflected and non-inflected forms. Among these words there are 199245 tokens (i.e., distinct words), each of which appears in the corpus with a different frequency of occurrence. For example, while the word māthā "head" occurs 968 times, māthāy "on head" occurs 729 times and māthār "of head" occurs 398 times, followed by other inflected forms like māthāte "in head", māthāṭā "the head", māthāṭi "the head", māthāgulo "heads", māthārā "heads", māthāder "to the heads", and māthāri "of head itself" with moderate frequency. This corpus is exhaustively used to extract sentences of a particular word required for</s>
|
<s>our system as well as for validating the senses evoked by the word used in the sentences. 6. PROPOSED APPROACH In the proposed approach, we have used Naive Bayes probabilistic model to classify the sentences based on some previously tagged learning sets. We have tested the efficiency of the algorithm over the Bengali corpus data stated above. In this approach we have used a sequence of steps (Fig. 3) to disambiguate the sense of māthā (head) – one of the most common ambiguous words in Bengali. The category-wise results are explained in results and evaluation section. 6.1 Text annotation At first, all the sentences containing the word ‘māthā’ are extracted from the Bengali text corpus (Section 5). Total number of sentence counts: 1747. That means there are at least 1747 sentences in this particular corpus where the word ���� (māthā) has been used in its non-inflected and non-compounded lemma form. However, since the sentences extracted from the corpus are not normalized adequately, these are passed through a series of manual normalization for (a) separation or detachment of punctuation marks like single quote, double quote, parenthesis, comma, etc. that are attached to words; (b) conversion of dissimilar fonts into similar ones; (c) removal of angular brackets, uneven spaces, broken lines, slashes, etc. from sentences; and (d) identification of sentence terminal markers (i.e., full stop, note of exclamation, and note of interrogation) that are used in written Bengali texts. 6.2 Stop word removal The very next stage of our strategy was the removal of stop words. Based on traditional definition and argument, we have identified all postpositions, e.g., ��� (dike) “towards”, ��� (prati) “per”, etc.; conjunctions, e.g., ��� (ebang) “and”, ��� (kintu) “but”, etc.; interjections, e.g., ��! 
(bāh) International Journal of Control Theory and Computer Modeling (IJCTCM) Vol.5, No.1, January 2015 “well”, ��� (āhā) “ah!”, etc.; pronouns, e.g., ��� (āmi) “I”, �� �� (tumi) “you”, �� (se) “she” etc.; some adjectives, e.g., (lāl) “red”, �� (bhālo) “good”, etc.; some adverbs, e.g., ��� (khub) “very”, ��� (satyi) “really”, etc.; all articles, e.g., ��� (ekṭi) “one”, etc. and proper nouns, e.g., ��� (rām) “Ram”, ����� (kalkātā) “Calcutta”, etc. as stop words in Bengali. To identify stop words the first step was to measure frequency of use of individual words, which we assumed, would have helped us to identify stop words. However, since the term frequencies of stop words were either very high or very low, it was not possible to set a particular threshold value for filtering out stop words. So, in our case, stop words are manually tracked and tagged with the help of a standard Bengali dictionary. 6.3 Learning Procedure As the proposed approach is based on supervised learning methodology, it is necessary to build up a strong learning set before testing a new data set. In our approach, we have therefore used three types of learning sets that are built up according to three different meanings of the ambiguous word ���� (māthā) “head”, which are collected from the Bengali WordNet. There</s>
|
<s>are five types of dictionary definition (glosses) for the word with twenty five (25) varied senses in the WordNet, such as the followings: i. Category: Noun Synonyms: ����, �!�, ��; Concept: ����� "প�� ��� ����� $�% Example: ���� ����� ��� �� � ��� �ii. Category: Noun Synonyms: ����, ��&; Concept: %�'� (�� ����� �� "প�� ��� �(����� $�% �)��� �*��, ���, ���, ��� �� ��� $+ ��� ��� )�� ���!- ��Example: ����� �.�� �(�� / ����0� ��1� �)� প�iii. Category: Noun Synonyms: ���� Concept: %�'�� ��� $�% )�� �, ��!- ��Example: ������ ����� *� ��� iv. Category: Noun Synonyms: ���� Concept: ��2�� �� 3�� $4��( Example: ���� ��5�� ������ 3� ��2��� ����� �(� ��v. Category: Noun Synonyms: ���� Concept: ����� 6*� ���, �� ��7 ��� �%�� Example: �) ���8� ����� �* ������ ��, ��� ������ ���� It is observed from the categories that the first category represent first type of meaning (���� = �!� ��), 2 and 3 category represent second type of meaning (���� = ��&), 4 and 5 category represent third type of meaning ($4��( = �%�� = �।:��(). Based on the information types we have International Journal of Control Theory and Computer Modeling (IJCTCM) Vol.5, No.1, January 2015 built up three specific categories of senses of ���� (māthā): (a) “�!�, ��”, (b) “����, ��&” and (c) “$4��(, �%��, �।:��(”. After this we have randomly chosen 55 sentences of each type of sense from the corpus to build the learning sets. 6.4 Modular representation of the proposed approach In this proposed approach all the sentences containing ���� (māthā) in the Bengali corpus are classified in three pre-defined categories using Naive Bayes (NB) classifier in the following manner (Fig. 3): Figure 3. Overall procedure is represented graphically 6.4.1 Explanation of Module 1: Building NB model In the NB model the following parameters are calculated based on the training documents: • |V| = the number of vocabularies, means the total number of distinct words belong to all the training sentences. 
• P(ci) = the priori probability of each class, means the number of sentences in a class / number of all the sentences. • ni = the total number of word frequency of each class. • P(wi | ci) = the conditional probability of keyword occurrence in a given class. To avoid “zero frequency” problem, we have applied Laplace estimation by assuming a uniform distribution over all words, as- P(wi | ci) = (Number of occurrences of each word in a given class + 1)/(ni + |V|) 6.4.2 Explanation of Module 2: Classifying a test document To classify a test document, the “posterior” probabilities, P(ci | W) for each class is calculated, as- P(ci | W) = P(ci) x The highest value of probability categorizes the test document into the related classifier. 7. RESULTS AND CORRESPONDING EVALUATIONS We performed testing operation over 271102 sentences of the Bengali corpus. As mentioned in section 6.1, this corpus consists of total 85 text categories of data sets like Agriculture, Botany, Child Literature, etc.</s>
|
<s>in which there are 1747 sentences containing the particular ambiguous word māthā.

[Fig. 3 flowchart: Bengali corpus of 271102 sentences → 1747 un-annotated sentences containing māthā → sentences annotated → Module 1: building the NB model → Module 2: classifying a test sentence → sentences classified into three pre-defined categories.]

After annotation (Section 6.1), each individual sentence is passed through the Naive Bayes model and the "posterior" probability P(ci | W) for each sentence is evaluated. The greater probability represents the associated sense for that particular sentence. The category-wise result is furnished in Table 1. The performance of our system is measured with the Precision, Recall and F-Measure parameters, stated as:

Precision (P) = number of instances responded to by the system / total number of sentences present in the corpus containing māthā.
Recall (R) = number of instances matched with the human decision / total number of instances.
F-Measure (FM) = (2 * P * R) / (P + R).

Table 1.
Performance analysis on the whole corpus. The original Bengali labels of the three sense columns are unreadable in this copy, so they are shown as Sense 1–3; “Wrong” per category is the sum of the per-sense wrong counts.

| Category | Total | S1 right | S1 wrong | S2 right | S2 wrong | S3 right | S3 wrong | Right | Wrong | P | R | FM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Accountancy | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | NA | NA | NA |
| Agriculture | 13 | 1 | 0 | 0 | 0 | 10 | 2 | 11 | 2 | 1 | 0.85 | 0.92 |
| Anthropology | 24 | 19 | 5 | 0 | 0 | 0 | 0 | 19 | 5 | 1 | 0.79 | 0.88 |
| Astrology | 6 | 4 | 0 | 2 | 0 | 0 | 0 | 6 | 0 | 1 | 1.00 | 1.00 |
| Astronomy | 3 | 1 | 1 | 1 | 0 | 0 | 0 | 2 | 1 | 1 | 0.67 | 0.80 |
| Banking | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0.00 | 0.00 |
| Biography | 50 | 24 | 3 | 17 | 0 | 6 | 0 | 47 | 3 | 1 | 0.94 | 0.97 |
| Botany | 3 | 2 | 0 | 0 | 0 | 1 | 0 | 3 | 0 | 1 | 1.00 | 1.00 |
| Business Math | 2 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0.50 | 0.67 |
| Child Lit | 86 | 36 | 7 | 21 | 1 | 20 | 1 | 77 | 9 | 1 | 0.90 | 0.94 |
| Criticism | 21 | 5 | 0 | 7 | 2 | 5 | 2 | 17 | 4 | 1 | 0.81 | 0.89 |
| Dancing | 2 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0.50 | 0.67 |
| Drawing | 3 | 1 | 1 | 1 | 0 | 0 | 0 | 2 | 1 | 1 | 0.67 | 0.80 |
| Economics | 16 | 0 | 0 | 3 | 0 | 12 | 1 | 15 | 1 | 1 | 0.94 | 0.97 |
| Education | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 1.00 | 1.00 |
| Essay | 23 | 7 | 1 | 10 | 0 | 5 | 0 | 22 | 1 | 1 | 0.96 | 0.98 |
| Folk | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | NA | NA | NA |
| GameSport | 60 | 34 | 4 | 13 | 0 | 9 | 0 | 56 | 4 | 1 | 0.93 | 0.97 |
| GenSc | 25 | 8 | 2 | 5 | 0 | 10 | 0 | 23 | 2 | 1 | 0.92 | 0.96 |
| Geology | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | NA | NA | NA |
| HistoryWar | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 2 | 0 | 1 | 1.00 | 1.00 |
| HomeSc | 38 | 12 | 1 | 3 | 0 | 21 | 1 | 36 | 2 | 1 | 0.95 | 0.97 |
| Humor | 33 | 6 | 3 | 16 | 0 | 8 | 0 | 30 | 3 | 1 | 0.91 | 0.95 |
| Journalism | 3 | 0 | 0 | 1 | 0 | 2 | 0 | 3 | 0 | 1 | 1.00 | 1.00 |
| Law&Order | 11 | 2 | 0 | 5 | 0 | 4 | 0 | 11 | 0 | 1 | 1.00 | 1.00 |
| Legislative | 6 | 3 | 3 | 0 | 0 | 0 | 0 | 3 | 3 | 1 | 0.50 | 0.67 |
| LetterDiary | 31 | 10 | 3 | 6 | 0 | 12 | 0 | 28 | 3 | 1 | 0.90 | 0.95 |
| Library | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 1 | 1.00 | 1.00 |
| Linguistic | 14 | 1 | 1 | 7 | 1 | 3 | 1 | 11 | 3 | 1 | 0.79 | 0.88 |
| Literature | 11 | 8 | 0 | 0 | 1 | 1 | 1 | 9 | 2 | 1 | 0.82 | 0.90 |
| Logic | 4 | 0 | 0 | 2 | 2 | 0 | 0 | 2 | 2 | 1 | 0.50 | 0.67 |
| Math | 7 | 1 | 1 | 1 | 0 | 3 | 1 | 5 | 2 | 1 | 0.71 | 0.83 |
| Medicine | 35 | 20 | 7 | 6 | 0 | 1 | 1 | 27 | 8 | 1 | 0.77 | 0.87 |
| Novel | 108 | 25 | 6 | 59 | 2 | 13 | 3 | 97 | 11 | 1 | 0.90 | 0.95 |
| Music | 2 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 2 | 1 | 0.00 | 0.00 |
| Other | 10 | 0 | 1 | 0 | 0 | 9 | 0 | 9 | 1 | 1 | 0.90 | 0.95 |
| Physics | 2 | 1 | 0 | 0 | 0 | 1 | 0 | 2 | 0 | 1 | 1.00 | 1.00 |
| PlayDrama | 12 | 4 | 1 | 2 | 3 | 2 | 0 | 8 | 4 | 1 | 0.67 | 0.80 |
| Pol Sc | 5 | 3 | 0 | 0 | 1 | 1 | 0 | 4 | 1 | 1 | 0.80 | 0.89 |
| Psychology | 6 | 2 | 2 | 0 | 0 | 2 | 0 | 4 | 2 | 1 | 0.67 | 0.80 |
| Religion | 7 | 3 | 0 | 2 | 1 | 1 | 0 | 6 | 1 | 1 | 0.86 | 0.92 |
| Scientific | 15 | 4 | 0 | 3 | 1 | 4 | 3 | 11 | 4 | 1 | 0.73 | 0.85 |
| Sculpture | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1.00 | 1.00 |
| Sociology | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | NA | NA | NA |
| Textbook | 29 | 6 | 2 | 3 | 0 | 16 | 2 | 25 | 4 | 1 | 0.86 | 0.93 |
| TranslatedLit | 18 | 8 | 1 | 3 | 2 | 4 | 0 | 15 | 3 | 1 | 0.83 | 0.91 |
| Vetenary | 4 | 2 | 2 | 0 | 0 | 0 | 0 | 2 | 2 | 1 | 0.50 | 0.67 |
| Zoology | 54 | 24 | 8 | 12 | 0 | 10 | 0 | 46 | 8 | 1 | 0.85 | 0.92 |
| ShortStory | 128 | 50 | 8 | 37 | 3 | 29 | 1 | 116 | 12 | 1 | 0.91 | 0.95 |
| MassMedia | 809 | 247 | 54 | 232 | 51 | 170 | 55 | 649 | 160 | 1 | 0.80 | 0.89 |
| OVER ALL | 1747 | 587 | 129 | 483 | 71 | 397 | 80 | 1467 | 280 | 1 | 0.84 | 0.91 |
</s>
|
<s>We have also used a JAVA inbuilt function to handle all types of morphologically derived forms of māthā available in the corpus, such as māthāy, māthār, māthābyāthā, māthāpichu, māthāte, etc. We have achieved 100% accuracy in this regard, which makes the Precision of the output 1. We have achieved 84% accuracy overall in the case of the 1747 sentences. In most of the cases the output is satisfactory, in the sense that it has rightly referred to the intended sense. However, in certain cases, the performance of the system is not up to the mark, as it failed to capture the actual sense of the word used in a specific syntactic frame. We have looked into each distinct case of failure and investigated the results closely to identify the pitfalls (Section 8). 8. 
FEW CLOSE OBSERVATIONS The following parameters influenced the output the most during the execution of the system: • As the prior probability of each class (P(ci))</s>
|
<s>depends on the number of sentences in each class, the probability is affected by the huge difference between class sizes. To overcome this, it is wiser to keep the number of sentences in each category constant (55 in our case). • As the total word frequency of each class (ni) differs widely from class to class, the conditional probability of keyword occurrence in a given class is affected by the huge inequality between the total word frequencies of the classes. But this incidence is not under anyone’s control, because the input sentences are taken from a real-life data set. For this reason, the results in a few cases come out wrong. • Use of dissimilar Bengali fonts has a very adverse impact on the output. Text data needs to be rendered in a uniform font. • In certain cases, irrelevant words used in a sentence have caused inconsistency in the final calculation. These sentence structures matter a lot to the accuracy of the results. For example: a) [a very long Bengali example sentence; the original script is unreadable in this copy] b) [another very long Bengali example sentence; script unreadable] etc. • Another major problem is just the opposite of the issue stated above. Here the short length of a sentence (in terms of the number of words) is a hindrance in capturing the intended meaning of the word. For example: [two very short Bengali example sentences; script unreadable] etc. In the case of such sentences, after discarding the stop words, insufficient information remains in the sentence to sense the actual meaning of the ambiguous word. 
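The first observation above — skewed class priors when category sizes differ — can be seen numerically. This is a minimal sketch; the per-sense counts below are hypothetical, not the paper's figures:

```python
def priors(class_sizes):
    """P(ci) = number of sentences in the class / number of all sentences."""
    total = sum(class_sizes.values())
    return {c: n / total for c, n in class_sizes.items()}

# Hypothetical per-sense sentence counts for an imbalanced training set.
raw = {"sense-1": 716, "sense-2": 554, "sense-3": 477}
capped = {c: min(n, 55) for c, n in raw.items()}   # cap every class at 55

print(priors(raw))     # skewed priors bias the posterior toward large classes
print(priors(capped))  # equal priors: each class contributes exactly 1/3
```

Capping every category at the same count (55 in the paper's setup) makes the prior term uniform, so the posterior is decided by the word likelihoods alone.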
We have tracked as well as analyzed all 280 wrong outputs and noted that most of them occurred due to very long or very short syntactic constructions. 9. CONCLUSION AND FUTURE WORKS In this paper, with a single lexical example, we have tried to show that our proposed approach can disambiguate the sense of an ambiguous word. Except for a few cases, the result obtained from our system is quite satisfactory and meets our expectation. We argue that a stronger and properly populated learning set would invariably yield better results. In future, we plan to disambiguate the 100 most frequently used ambiguous words of different parts of speech in Bengali. Finally, as the target word used in our experiment is a noun, it is comparatively easy</s>
|
<s>to handle its inflections as the number of inflected forms is limited. In case of a verb, however, we may need a stemmer for retrieving the stem or lemma from a conjugated verb form. REFERENCES [1] Ide, N., Véronis, J., (1998) “Word Sense Disambiguation: The State of the Art”, Computational Linguistics, Vol. 24, No. 1, Pp. 1-40. [2] Cucerzan, R.S., C. Schafer, and D. Yarowsky, (2002) “Combining classifiers for word sense disambiguation”, Natural Language Engineering, Vol. 8, No. 4, Cambridge University Press, Pp. 327-341. [3] Nameh, M., S., Fakhrahmad, M., Jahromi, M.Z., (2011) “A New Approach to Word Sense Disambiguation Based on Context Similarity”, Proceedings of the World Congress on Engineering, Vol. I. [4] Xiaojie, W., Matsumoto, Y., (2003) “Chinese word sense disambiguation by combining pseudo training data”, Proceedings of The International Conference on Natural Language Processing and Knowledge Engineering, Pp. 138-143. [5] Navigli, R. (2009) “Word Sense Disambiguation: a Survey”, ACM Computing Surveys, Vol. 41, No.2, ACM Press, Pp. 1-69. [6] Gaizauskas, R., (1997) “Gold Standard Datasets for Evaluating Word Sense Disambiguation Programs”, Computer Speech and Language, Vol. 12, No. 3, Special Issue on Evaluation of Speech and Language Technology, Pp. 453-472. [7] Seo, H., Chung, H., Rim, H., Myaeng, S. H., Kim, S., (2004) “Unsupervised word sense disambiguation using WordNet relatives”, Computer Speech and Language, Vol. 18, No. 3, Pp. 253-273. [8] G. Miller, (1991) “WordNet: An on-line lexical database”, International Journal of Lexicography, Vol.3, No. 4. [9] Kolte, S.G., Bhirud, S.G., (2008) “Word Sense Disambiguation Using WordNet Domains”, First International Conference on Digital Object Identifier, Pp. 1187-1191. [10] Liu, Y., Scheuermann, P., Li, X., Zhu, X. (2007) “Using WordNet to Disambiguate Word Senses for Text Classification”, Proceedings of the 7th International Conference on Computational Science, Springer-Verlag, Pp. 781 - 789. 
[11] Miller, G. A., Beckwith, R., Fellbaum, C., Gross, D., Miller, K.J., (1990) “WordNet: An on-line Lexical Database”, International Journal of Lexicography, 3(4): 235-244. [12] Miller, G.A., (1993) “WordNet: A Lexical Database”, Comm. ACM, Vol. 38, No. 11, Pp. 39-41. [13] Cañas, A.J., A. Valerio, J. Lalinde-Pulido, M. Carvalho, and M. Arguedas, (2003) “Using WordNet for Word Sense Disambiguation to Support Concept Map Construction”, String Processing and Information Retrieval, Pp. 350-359. [14] http://en.wikipedia.org/wiki/Naive_bayes [15] Dash, N.S., (2007) Indian scenario in language corpus generation. In, Dash, N.S., P. Dasgupta and P. Sarkar (Eds.) Rainbow of Linguistics: Vol. I. Kolkata: T. Media Publication. Pp. 129-162. [16] Dash, N.S., (1999) “Corpus oriented Bangla language processing”, Jadavpur Journal of Philosophy. 11(1): 1-28. [17] Dash, N.S., (2000) “Bangla pronouns-a corpus based study”, Literary and Linguistic Computing. 15(4): 433-444. [18] Dash, N.S., (2004) Language Corpora: Present Indian Need, Indian Statistical Institute, Kolkata, http://www.elda.org/en/proj/scalla/SCALLA2004/dash.pdf. [19] Dash, N.S. (2005) Methods in Madness of Bengali Spelling: A Corpus-based Investigation, South Asian Language Review, Vol. XV, No. 2. [20] Dash, N.S., (2012), From KCIE to LDC-IL: Some Milestones in NLP Journey in Indian Multilingual Panorama. Indian Linguistics. 73(1-4): 129-146. [21] Dash,</s>
|
<s>N.S. and B.B. Chaudhuri, (2001) “A corpus based study of the Bangla language”, Indian Journal of Linguistics. 20: 19-40. [22] Dash, N.S. and B.B. Chaudhuri, (2001) “Corpus-based empirical analysis of form, function and frequency of characters used in Bangla”, in Rayson, P., Wilson, A., McEnery, T., Hardie, A., and Khoja, S. (eds.) Special issue of the Proceedings of the Corpus Linguistics 2001 Conference, Lancaster: Lancaster University Press, UK. 13: 144-157. 2001. [23] Dash, N.S. and B.B. Chaudhuri., (2002) Corpus generation and text processing, International Journal of Dravidian Linguistics. 31(1): 25-44. [24] Dash, N.S. and B.B. Chaudhuri, (2002) “Using Text Corpora for Understanding Polysemy in Bangla”, Proceedings of the Language Engineering Conference (LEC'02) IEEE. [25] Dolamic, L. and J. Savoy, (2010) “Comparative Study of Indexing and Search Strategies for the Hindi, Marathi and Bengali Languages”, ACM Transactions on Asian Language Information Processing, 9(3): 1-24. [26] Jurafsky, D. and J.H. Martin, (2000) Speech and Language Processing, ISBN 81-7808-594-1, Pearson Education Asia, page no: 604. [27] http://www.isical.ac.in/~lru/wordnetnew/ Authors Alok Ranjan Pal has been working as an Assistant Professor in the Computer Science and Engineering Department of the College of Engineering and Management, Kolaghat, since 2006. He completed his Bachelor's and Master's degrees under WBUT. Now, he is working on Natural Language Processing. Dr. Diganta Saha is currently working as an Associate Professor in the Department of Computer Science & Engineering, Jadavpur University, Kolkata. His fields of specialization are Machine Translation, Natural Language Processing, Mobile Computing, and Pattern Classification. Dr. Niladri Sekhar Dash is currently working as an Assistant Professor in the Department of Linguistics and Language Technology, Indian Statistical Institute, Kolkata. 
His field of specialization is Corpus Linguistics, Natural Language Processing, Language Technology, Word Sense Disambiguation, Computer Assisted Language Teaching, Machine Translation, Computational Lexicography, Field Linguistics, Graded Vocabulary Generation, Applied Linguistics, Educational Technology, Bengali Language and Linguistics.</s>
|
<s>Bengali Ethnicity Recognition and Gender Classification using CNN & Transfer Learning. Proceedings of the SMART–2019, IEEE Conference ID: 46866, 8th International Conference on System Modeling & Advancement in Research Trends, 22nd–23rd November, 2019, College of Computing Sciences & Information Technology, Teerthanker Mahaveer University, Moradabad, India. Copyright © IEEE–2019, ISBN: 978-1-7281-3245-7. Abstract— In this paper, we have demonstrated how to apply a CNN (Convolutional Neural Network) structured model and transfer learning to identify the ethnicity of Bengali people, together with a systematic process for gender classification. We also applied several transfer learning models, such as VGG16, MobileNet, ResNet50, etc., to find out which model is the most convenient for reaching our desired accuracy. Problems arise because many Indian people look and dress like Bengalis, since many Bengalis dwell in India and many of them speak Bangla as well (the people of Kolkata along with some other provinces). So, Bengali people are found not only in Bangladesh but also elsewhere in the world. That is why our model is based on facial images along with the tradition of their costumes. We tried to build a sophisticated model using CNN and transfer learning for this purpose, and we got some tremendous performance by applying transfer learning. Keywords: Data Augmentation, Ethnicity Recognition, Gender Classification, Convolutional Neural Network, Transfer Learning, Fine-Tuning, Deep Learning, Bottleneck Features. I. Introduction Gender classification and ethnicity recognition have received attention in computer vision research recently. There are a handful of works on gender classification and ethnicity recognition from facial images using CNN (Convolutional Neural Network), and a few works have been accomplished using transfer learning, where most of them do not include bottleneck features. So, this gap must be figured out.</s>
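The bottleneck-feature idea mentioned in the abstract can be sketched in plain NumPy. This is an illustrative toy, not the paper's pipeline: the "frozen backbone" here is just a fixed random projection standing in for a pre-trained network such as VGG16, and all the sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a frozen pre-trained backbone (e.g. VGG16 without its top):
# a fixed random projection followed by ReLU.
W_frozen = rng.standard_normal((64, 8))

def bottleneck_features(images):
    """Run the frozen extractor once and cache its outputs (bottleneck features)."""
    return np.maximum(images @ W_frozen, 0.0)

# Toy data: 20 flattened 8x8 "images", two classes.
X = rng.standard_normal((20, 64))
y = np.array([0] * 10 + [1] * 10)

feats = bottleneck_features(X)   # computed once; every training epoch reuses it

# Train only a small logistic-regression head on the cached features.
w = np.zeros(feats.shape[1])
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))
    w -= 0.1 * feats.T @ (p - y) / len(y)

train_acc = float(np.mean((feats @ w > 0) == (y == 1)))
```

The design point is that the expensive backbone runs over the dataset exactly once; only the small head is trained, which is what makes transfer learning with bottleneck features cheap.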
Another reason is that, when classifying gender or ethnicity, the cues vary from nation to nation, and the tradition of costume plays a major role. In this paper, we focus on gender classification and ethnicity recognition for Bengalis, applying CNN followed by several transfer learning models. So, how are we going to do this? There are differences between male and female facial images, where clothing will definitely cause occlusion, because clothing varies between genders mostly among the Bengali people. Moreover, hairstyle is another factor, and hair length is even more confusing. For all of this research, CNNs have been applied widely, such as in pattern recognition [01], face detection [02], and action recognition [03]. Our proposed CNN model is a straightforward architecture along with the application of bottleneck features. Despite its simplicity, our CNN model achieved competitive performance, and by applying transfer learning we achieved even more outstanding performance. The arrangement of this paper goes as follows: in section 2, the related work is introduced; in section 3, the dataset is described; in section 4, the pre-processing of the dataset and our primary CNN model are explained (to estimate the number of nodes, etc.); in section 5, the transfer learning models are applied along with the proposed CNN model</s>
|
<s>and at last, in section 6, the conclusion is drawn. II. Related Work Most of the CNN-based papers that have been published so far are basically for the prediction of age or gender, but on ethnicity there are only a few papers [04] available. Detailed age classification methods have also been described in those literature review papers [05], [06], [07], [08]. Actually, gender classification from images is thought to be a very influential piece of work using CNN, because its uses are found all around the globe for classifying gender from images, and their ethnicity as well. Many papers have been written with deep analysis, and we can get their ideas and approaches from these references [9]. Golomb et al. [10] were among the first researchers who applied CNN on a small dataset of facial images to classify gender. Yang and Moghaddam also used an SVM (Support-Vector Machine) [11] to classify gender, and Baluja and Rowley [12] adopted AdaBoost for implementing the same scheme from facial images. Toews and Arbel [13] also offered many viewpoints, presenting a model that could extract many features to classify human gender and age. Gender can be classified from facial features; Wang et al. used 19 features [14]. Apart from these, Ekman et al. [15] distinguished two methods for studying facial features: “message-based” and “sign-based”. We should go through some basic definitions to understand this paper better; some important definitions are described here. Bengali Ethnicity Recognition and Gender Classification Using CNN & Transfer Learning. Md. Jewel1, Md. Ismail Hossain2 and Tamanna Haider Tonni3, 1,2,3Dept. of CSE, Daffodil International University, Dhaka, Bangladesh. E-mail: 1jewel15-8071@diu.edu.bd, 2ismail15-7838@diu.edu.bd, 3tonni15-654@diu.edu.bd 
A. Deep Convolutional Neural Network CNN [16], the Convolutional Neural Network, is formed by one or more input layers and hidden layers, with one or more output layers. Nowadays CNN is used as a remarkable strategy to extract features from images and in visual recognition [17]. After extracting features, a few subsequent layers combine them and, finally, the feature maps are encoded in a 1D vector, where they are categorized. Generally, we downsample through the model's layers until the last layer, where the dense layer has a number of nodes equal to the number of initial classes. The neural network is trained by a backpropagation algorithm [18]. The equational representation is [19]: Cj = σ(Σi Wi,j ⊗ Ii + bj) (1). Fig. 1 shows the representation of equation (1), where Cj is the j-th output feature map and bj the corresponding trainable bias, Wi,j is a filter of size n×m, Ii is the feature map from the previous layer, and σ introduces the non-linearity: either the sigmoid, σ(x) = 1/(1 + e^(−x)), or the hyperbolic tangent function, σ(x) = tanh(x). Fig. 1: Neural</s>
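Equation (1) can be made concrete with a small NumPy sketch that computes one output feature map Cj from several input maps Ii. This is a didactic toy (naive loops, valid cross-correlation, random toy data), not an efficient or production convolution:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_feature_map(inputs, filters, bias, act=sigmoid):
    """One output map of equation (1): C_j = act(sum_i W_ij (*) I_i + b_j),
    using 'valid' 2-D cross-correlation of each input map with its n x m filter."""
    n, m = filters[0].shape
    h, w = inputs[0].shape
    out = np.full((h - n + 1, w - m + 1), float(bias))
    for I, W in zip(inputs, filters):            # sum over input maps i
        for r in range(out.shape[0]):
            for c in range(out.shape[1]):
                out[r, c] += np.sum(I[r:r + n, c:c + m] * W)
    return act(out)

# Toy example: two 4x4 input feature maps, two 3x3 filters, one output map.
rng = np.random.default_rng(0)
inputs = [rng.standard_normal((4, 4)) for _ in range(2)]
filters = [rng.standard_normal((3, 3)) for _ in range(2)]
C = conv_feature_map(inputs, filters, bias=0.1)
print(C.shape)  # (2, 2)
```

Swapping `act=np.tanh` gives the hyperbolic-tangent variant mentioned in the text; with the sigmoid, every entry of the output map lies strictly in (0, 1).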
|