as well as the documents. It generates a mapping between words and documents. The word-versus-concept relationship can essentially create noise due to the randomness of word selection. Some dramatic simplifications are added in LSA in order to solve this problem. The techniques are as follows:

1. Like LDA, it uses the "Bag of Words" representation. For this representation, the order of the words is not as important as their frequency.
2. Word patterns are considered and used to connect words with similar meanings. For example, "bank", "loan" and "credit" might be used to define a financial institution.
3. To make the solutions tractable, it is assumed that each word has only one meaning.

Unlike LDA, LSA/LSI is not a probabilistic model that calculates a Dirichlet prior over the latent topics. LSA is rather a matrix decomposition technique applied to the document-term matrix. A comparison of LDA and LSA was performed in the dissertation in [23]. In practice there is a trade-off between these two algorithms: LSA has a much faster training time, but it lacks accuracy in comparison with LDA [23]. LSA applies Singular Value Decomposition (SVD) to the training corpus. The SVD used in LSA consists of three matrices [24]. SVD represents a matrix T as the following product:

T = K S D^T

Here K is a topic-keyword matrix, S is a topic-topic matrix and D^T is a document-topic matrix, where K and D are orthogonal matrices and S is a diagonal matrix. The resultant matrix after multiplication represents a document-by-keyword matrix. So, in T each document is tagged with a single keyword that describes the document. Matrix S is a diagonal matrix that is used to track the relevance between the terms.
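The decomposition T = K S D^T and the subsequent truncation can be sketched with numpy on a toy document-term matrix. This is a minimal sketch; the matrix below and its implied vocabulary are invented purely for illustration:

```python
import numpy as np

# Toy document-term matrix: 4 documents x 5 terms (term counts).
# Rows and vocabulary are invented for illustration only.
T = np.array([
    [2, 1, 0, 0, 0],   # e.g. a document about "bank", "loan"
    [1, 2, 0, 0, 0],
    [0, 0, 1, 2, 1],   # e.g. a document about "river", "water"
    [0, 0, 2, 1, 1],
], dtype=float)

# Full SVD: T = K S D^T, with K, D orthogonal and S diagonal.
K, s, Dt = np.linalg.svd(T, full_matrices=False)

# Keep only the k largest singular values (dimensionality reduction).
k = 2
T_k = K[:, :k] @ np.diag(s[:k]) @ Dt[:k, :]

# The rank-k matrix approximates T while suppressing noise.
print(T_k.shape, np.linalg.matrix_rank(T_k))
```

Keeping only the k largest singular values yields the rank-k latent space in which related terms and documents can be compared.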
So, how closely the different terms are related can be identified. The SVD of the matrix is consequently followed by dimensionality reduction to reduce the noise in the latent space, which provides an efficient word relationship to better understand the semantics of the corpus.

SVD is basically a second-order eigenvector technique, and it can account for the relevance between different terms within a corpus. LSA is an application of SVD widely used for text processing in particular. LDA, on the other hand, takes a different approach to modelling topics in an unsupervised learning setting. It works on the assumption and estimation of a Dirichlet prior in a Bayesian network, so it is more of a probabilistic model. These priors are the important starting point of the calculations, because these values are needed to initially generate a probabilistic solution, which gets better over time. The priors are then plugged in with the same principle as a Bayesian network. LDA, however, tries to determine an appropriate prior rather than using the one-fits-all second-order correlation like LSA. In terms of accuracy LDA is better, but that again depends on the data and the goal. The dataset that will be used in chapter 4 is not huge. That is why variations of LDA will be experimented with to observe their performance.

2.2.4 Hierarchical Dirichlet Process

Hierarchical Dirichlet Process (HDP) is another variation of LDA that will be used to compare with. HDP does not need to know the number of topics a priori like LDA, and this is the reason why HDP is used in many applications of NLP. As far as LDA is concerned, the number of topics to be extracted from a corpus has to be declared beforehand, since the LDA algorithm cannot predict the right number of topics it should extract. With the HDP algorithm, however, the number of topics can be learned from the data and topics can be generated accordingly. Like LDA, HDP also uses a Dirichlet allocation to capture the uncertainty in the number of topics. A common base distribution is selected for identifying the set of possible topics for the corpus. After that, the base distribution is used to sample each document to find the possible topics of the corpus. Like LDA, HDP also considers the number of mixed words for each topic. However, it does not fix the number of topics for each document. Rather, the number of topics is also generated by a Dirichlet allocation, which consequently makes the topic a random variable. The name Hierarchical comes from the idea of adding one more level on top of the probabilistic model. In a topic model, each document is organized with words, and each document forms from the Bag of Words indexing. Let us assume that the indexing is j = 1, ..., J and each indexed group has data items x_j1 to x_jm.

The HDP model is controlled by a base distribution H from which the a priori over data items is selected. Some other parameters control the number of clusters from the unsupervised corpus. The base distribution is denoted as G_0. The probability measure of the j-th cluster is given by the following formalization:

G_j | G_0 ∼ DP(α, G_0)

Here G_j is the distribution for the j-th topic and DP denotes the Dirichlet process. G_0 is further formalized as follows:

G_0 ∼ DP(α, H)

Here, H is the base distribution that has been discussed above and α is the concentration parameter.
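A draw G ∼ DP(α, H) can be simulated with the stick-breaking construction. This is only a sketch: it assumes a standard normal base distribution H for illustration and truncates the process at a fixed number of atoms; all names are invented.

```python
import numpy as np

def stick_breaking(alpha, h_sample, n_atoms, rng):
    """Truncated stick-breaking draw from DP(alpha, H).

    Returns atom locations (drawn from the base distribution H)
    and their masses, which sum to approximately one.
    """
    betas = rng.beta(1.0, alpha, size=n_atoms)     # stick proportions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining                    # pi_k
    atoms = h_sample(n_atoms)                      # theta*_k ~ H
    return atoms, weights

rng = np.random.default_rng(0)
atoms, weights = stick_breaking(alpha=1.0,
                                h_sample=lambda n: rng.standard_normal(n),
                                n_atoms=1000, rng=rng)
# A small alpha concentrates mass on a few atoms; the masses sum to ~1.
print(weights.max(), round(weights.sum(), 4))
```

A smaller concentration parameter α puts most of the mass on a handful of atoms, which is how the number of effective clusters is controlled.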
Now, as the concentration parameter α and the base distributionH are achieved, each data item can be associated with a latent parameter θ asfollows:θji |Gj ≈ Gjxji |θji ≈ F (θji)The above two equations describe how each parameter and each data item areassociated with their dependency variable. The first equation shows how eachparameter for each data item is dependent on the prior distribution given byGj. The second equation shows how each data item is associated with individualparameter and has a distribution of F (θji). This is known as the HDP mixturemodel. This is hierarchically linked to a set of Dirichlet process. AS mentionedin [25] to understand the HDP model, we need to understand that it implementsthe clustering and all the clusters are further shared across groups of data. Thefollowing equation can be considered:G0 =k=1π0kδθ∗kIt is assumed that there are infinite number of atoms supported by the base dis-tribution H. The atoms are denoted by δ and have masses denoted by π. Themasses need to sum up to one since G0 is a probability measure. However as thereis no free lunch, HDP models</s>
<s>have their limitations too. One major limitationwith the HDP is that it does not consider the relationship among the documents.It calculates the mixture of all the documents and achieves the topics from themixture model. This can be inefficient with a document collection where interdocument relationship is important. Also, during the assignment of topics to thedocuments, probabilities of the neighborhood should be more than the ones farapart. Since the HDP model mix all the documents and then assign the topics, itcan not keep track of the topics that are assigned to the relevant documents.2.2.5 Singular Value DecompositionIn linear algebra Singular Value Decomposition (SVD), means the factorization of areal or a complex matrix [26]. In a more formal way, a singular value decompositionof a real or a complex matrix having dimension of m by n and the matrix knownas M can be factorized with the form UV ∗ where U is a m by n real unitarymatrix.is a m by n rectangular diagonal matrix with non negative real numberson the diagonal and V is a m by n complex unitary matrix. Let us have an exampleof SVD. Assume a 4 by 5 matrix M as follows:M =∥∥∥∥∥∥∥∥∥∥1 0 0 0 20 0 3 0 00 0 0 0 00 2 0 0 0∥∥∥∥∥∥∥∥∥∥So the SVD of this matrix according to the formula UV ∗ is as follows:U =∥∥∥∥∥∥∥∥∥∥0 0 1 00 1 0 00 0 0 −11 0 0 0∥∥∥∥∥∥∥∥∥∥Σ =∥∥∥∥∥∥∥∥∥∥2 0 0 0 00 3 0 0 00 05 0 00 0 0 0 0∥∥∥∥∥∥∥∥∥∥V ∗ =∥∥∥∥∥∥∥∥∥∥∥∥∥0 1 0 0 00 0 1 0 00.2 0 0 00.80 0 0 1 00.8 0 0 0 −0.2∥∥∥∥∥∥∥∥∥∥∥∥∥The same thing happens on a LSA algorithm. After we achieve term-documentmatrix, SVD is applied on that matrix to infer the topics. What the SVD doesis it reduces the dimensionality of the matrix extracted from document-termsmatrix. Dimensionality reduction is implemented by ignoring the small singularvalues in the diagonal matrix. Regardless of the number of singular values setto zero, the resulting matrix which is the matrix achieved after the calculationretains its original shape. 
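The example above can be checked numerically. A small numpy sketch follows; note that numpy returns the singular values sorted in descending order, and the signs of the singular vectors may differ from the hand-worked factors:

```python
import numpy as np

M = np.array([[1, 0, 0, 0, 2],
              [0, 0, 3, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 2, 0, 0, 0]], dtype=float)

U, s, Vt = np.linalg.svd(M)
# numpy reports the singular values in descending order: 3, sqrt(5), 2, 0.
print(np.round(s, 4))

# Zeroing the small singular values reduces dimensionality, but the
# reconstructed matrix keeps the original 4x5 shape.
s_reduced = s.copy()
s_reduced[2:] = 0.0                      # keep only the two largest
Sigma = np.zeros_like(M)
np.fill_diagonal(Sigma, s_reduced)
M_reduced = U @ Sigma @ Vt
print(M_reduced.shape)
```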
It does not drop any columns or rows from the original matrix. The effects of the dimensionality reduction are only reflected in the decomposed version.

Consider for example a large matrix consisting of many rows and columns. If the matrix is of rank 1, its columns and rows only span a one-dimensional space. From this matrix, only one non-zero singular value will be obtained after the decomposition takes place. So, instead of storing this large matrix, two vectors and one real number can be stored. This results in a reduction of one order of magnitude. So, SVD can be considered an efficient storage mechanism, or it can be extended to literal dimensionality reduction if the reduced matrix is fed into the algorithm. This, however, depends on what kind of predictions are aimed for.

Let us now focus on some formalization of the SVD. Assume a document collection which is a D by n matrix, where D is the vocabulary size and n is the number of documents. Here the columns are document BOW vectors. When LSI is applied as SVD to the documents, it only keeps the d largest singular values, where d ≪ min(D, n). That is, let U′ (D by d) be the first d columns of U, S′ (d by d) be the corresponding submatrix of S, and V′ (n by d) be the first d columns of V. Then the reduced version of the matrix will be as follows:

U′ S′ V′^T

This is the rank-d approximation in a least-squares sense. However, the question that arises is what happens to data instances that are not in the training set but will occur in the test set. The answer is that a new test document is added to the existing data points and the SVD is computed on the n+1 documents. This is computationally expensive. If the document, however, coincides with an existing document, it will have the same coordinates as that document [27].

2.2.6 Evaluation of Topics

Evaluation of topic modelling is as important a task as for any other machine learning model. With the evaluation process it can be determined how well the model is working. Topic coherence is a scientific method that measures the human interpretability of a topic model. Traditionally perplexity has often been used, but it was found that perplexity does not always correlate with human annotations [28]. On the other hand, topic coherence is a topic evaluation method with a higher guarantee of human interpretability [29]. So, it can be used to compare different topic models as well as individual topics. How well correlated the topics are can be understood from the topic coherence score. Topic coherence measures the similarity between each pair of terms under a topic. Assume that a topic T1 has N words from w_1 to w_N. The topic coherence then computes the sum with the following formula:

Coherence = Σ_{i<j} Score(w_i, w_j)

Topic coherence computes the sum of pairwise scores of the words w_1 ... w_N that belong to the topic.
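The pairwise coherence sum can be sketched in Python using document co-occurrence counts as the score function, in the style of the UMass measure; the toy corpus and topic words here are invented for illustration:

```python
import math
from itertools import combinations

# Toy corpus of tokenized documents (invented for illustration).
docs = [
    {"bank", "loan", "credit"},
    {"bank", "credit", "money"},
    {"river", "water", "bank"},
    {"loan", "credit"},
]

def umass_score(wi, wj, docs):
    """log((D(wi, wj) + 1) / D(wi)) -- a UMass-style pairwise score."""
    d_i = sum(1 for d in docs if wi in d)
    d_ij = sum(1 for d in docs if wi in d and wj in d)
    return math.log((d_ij + 1) / d_i)

def coherence(topic_words, docs):
    """Sum of pairwise scores over all word pairs (i < j) in the topic."""
    return sum(umass_score(wi, wj, docs)
               for wi, wj in combinations(topic_words, 2))

print(round(coherence(["bank", "loan", "credit"], docs), 4))
```

Words that frequently appear in the same documents push the score up, so a topic whose terms co-occur often is judged more coherent.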
It computes all the possible pairs of the words inside a topic.In this research topic coherence is used to evaluate the topics and check how wellthey correlate human annotations. However, human interpretation is still neededto tag the topics.From a mathematical point of view, coherence is the log likelihood of a word wigiven the other word wj. That is how it understands the correlation between apair of words. The Umass measure of coherence introduced by Mimno et al. in[30] is used in this thesis. A pairwise score function is used as follows:SCOREUMass(wi, wj) = logD(wi, wj) + 1D(wi)Here, D(wi, wj) is the count of documents containing both the words wi and wj.D(wi) is the count of the documents containing only wi.2.2.7 Recurrent Neural NetworkRecurrent Neural Network (RNN) is a variation of Neural Network that is widelyused in the natural language processing due to its capability to “remember” pre-vious calculations along the process. The idea behind a RNN is to involve allthe previous calculations to find out the output for the current layer. Unlike atraditional neural network where all the inputs are independent of each other, ina RNN</s>
<s>each input carries forward to the next layer and an output at any layeris dependent on the previous layers. The name Recurrent comes from the ideaof performing the same task for every element in a sequence. However, the RNNcan also be considered as a memory saving tool. It is a neural network that canremember the result of past calculations and use it to infer the prediction for thenext element in the sequence. Theoretically, it can make use of information inarbitrarily long sequences but in practice, they are limited to looking at only acouple of steps back [31]. A typical RNN is illustrated in Figure 2.1.Figure 2.1: A typical RNN [31]The diagram shows how the unfolding of the network takes place. Unfoldingmeans the expansion of the network for the full sequence. For example if thereis a sequence of 5 words, the network would have 5 layers after the expansion orunfolding takes place. The formulas that control the calculations are as follows:• Xt is the input to a layer t. That means Xn could be a vector correspondingto the nth word in the sequence that the RNN is working on.• St is a hidden value at layer t. It can be considered as the “memory” of thenetwork which has all the previous calculations involved. St is calculatedbased on the previous S value that is St−1 and the current input value St =f(UXt +WSt−1) where W is a weight applied to the input. The function fis a nonlinear function usually tanh or Relu.• Ot is the output at step t. It is usually calculated with a Softmax function.Ot = Softmax(V St).RNNs are widely used in predicting the semantic meaning of a sentence and thuscan be extensively used in sentiment analysis which is the case in this thesis. Avariation of RNN will be used for sentiment analysis with the Bangla languagewhich is the second contribution and covers a full chapter. LSTM is used whichis described in the next subsection.2.2.8 Long Short Term MemoryLong Short Term Memory (LSTM) is a variation of the RNN. 
LSTM is basically a combination of various RNN units with the objective of memorizing certain parts of the input sequence. A lot of the time it is important to memorize a certain block of the data but not all of it, and that needs to be controlled with some technique. For example, if there is a need to predict the sales rate of a particular product during the Christmas season, then one needs to look at the sales data around the month of December of the previous years; considering the data around the other months makes no sense in this case. For this kind of task the LSTM is used, which remembers certain calculations. How much it should remember can be controlled through its calculations. Theoretically, an RNN can also remember certain parts of the previous input, but it suffers from two problems: the vanishing gradient and the exploding gradient [32]. To solve these problems, the LSTM was introduced, which incorporates a memory unit known as a cell into the network. A typical LSTM network is illustrated in Figure 2.2.

Figure 2.2: A typical LSTM [32]

Initially the LSTM might look like a complicated neural network, but as it is explained step by step the diagram will start to make sense. For the time being, let us only focus on the input and output of the LSTM unit. Three inputs are taken by this network. X_t is the input taken for the current time step. H_{t−1} is the output from the previous LSTM unit, which merges with the current input. C_{t−1} is the memory coming in from the previous time step, and the needed portion of this memory is combined with the current input and the previous output. Then the next output and the next memory are formed, and the layers continue like this.

In the LSTM, two more important things to consider are the "forget" gate and the "memory" gate, denoted by the × and + symbols respectively in the diagram in Figure 2.2. These two parts of the LSTM unit act like water valves. If the memory that passes along the chain of LSTM units is assumed to be water, then this water needs to be controlled with the forget valve. What the forget valve does is control how much of the previous information passes through to the next unit. Mathematically, this is an element-wise vector multiplication. If the previous memory C_{t−1} is multiplied by a vector that is close to zero, that means forgetting most of the previous memory. Next is the memory valve. With this, new input comes into the flow and merges with the old memory. This is known as the memory unit. Mathematically, this is an element-wise summation. After this operation, the old memory C_{t−1} is transformed into C_t. The mathematical calculations that take place inside those valves will be discussed in the chapter where a model for the sentiment analysis is proposed, together with how the output is shaped along the path.

2.2.9 Gated Recurrent Unit

In this section the Gated Recurrent Unit (GRU) is explained, which is a special kind of neural network and is used in this thesis for the sentiment analysis. This neural network was introduced by Cho et al. in 2014 in [33] with the objective of solving the vanishing gradient problem, which is quite common with a standard recurrent network. GRU is basically a variation of the LSTM with slight changes in the architecture of the gates and their functionalities. What makes the GRU different from standard RNNs is that it uses update gates and reset gates to control the flow of information along the layers from one unit to the other. It can handle the long-term dependency problem well enough and is trained to keep information for a relatively long time. To understand the GRU better, a single-unit example is taken and the underlying mathematics is explained.

Figure 2.3: A recurrent neural network with a gated recurrent unit [32]

As can be seen from Figure 2.3, this is a recurrent neural network with a gated recurrent unit. A more detailed diagram is shown in Figure 2.4.

Figure 2.4: The GRU unit [32]

Let us first get familiarized with the symbols in the diagram in Figure 2.4. The + symbol means the addition operation, and the σ symbol signifies the Sigmoid function that takes place inside the GRU. The ⊙ notation signifies the Hadamard product, meaning element-wise vector multiplication. The tanh symbol means the tanh function, which also takes place inside the model. The workflow of the GRU is explained in a step-by-step process.

2.2.9.1 Update Gate

The update gate starts with the following equation:

z_t = σ(W(z) x_t + U(z) h_{t−1})

In this formula, x_t is the input coming into the unit at time stamp t and h_{t−1} holds the information from the previous unit. What the update gate does is go through a merge technique. However, when merging the current input with the previous output, an important question arises: how much of each piece of information to take along and pass through to the next phase. To control the amount of information, weights are applied to them. As can be seen from the above equation, W(z) is the weight applied to the input x_t and U(z) is the weight applied to the output of the previous unit h_{t−1}. Once these two vectors are added, which is considered the "merging", they are transformed to a value between 0 and 1 by the sigmoid function.

Figure 2.5: Diagram showing the sigmoid activation for merge [32]

The powerful thing about the GRU is that it can combine the exact amount of information needed from the previous time stamp. So, the model can even decide to keep all the past information and eliminate the vanishing gradient problem.

2.2.9.2 Reset Gate

Another important part of the GRU is the reset gate. It is essentially used to determine how much of the previous information to forget. Since the information received at each unit does not have the same importance for the output, it is often necessary to have control over the information to forget.
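A minimal numpy illustration of the update-gate equation, with randomly initialized weights and an invented hidden size:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(42)
n = 4                                   # hidden size (illustrative)
x_t = rng.standard_normal(n)            # current input
h_prev = rng.standard_normal(n)         # previous unit's output
W_z = rng.standard_normal((n, n))       # weight on the input
U_z = rng.standard_normal((n, n))       # weight on the previous output

# Merge the weighted input with the weighted previous output,
# then squash each component into (0, 1) with the sigmoid.
z_t = sigmoid(W_z @ x_t + U_z @ h_prev)
print(np.all((z_t > 0) & (z_t < 1)))    # every gate value lies in (0, 1)
```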
The same formula as for the update gate is used, with different weights:

r_t = σ(W(r) x_t + U(r) h_{t−1})

Figure 2.6: GRU reset function [32]

In Figure 2.6, h_{t−1} and x_t are combined with their corresponding weights, and the sum is compressed to a value between 0 and 1 by the sigmoid function, which forgets the necessary amount of information.

2.2.9.3 Current Memory Content

The memory content is used to hold information for later use along the units towards the final output. The reset gate is used to store the relevant memory information of the past, and the calculation is done as follows:

h′_t = tanh(W x_t + r_t ⊙ U h_{t−1})

As can be seen from the formula, the input x_t is multiplied by its weight W. Next, the element-wise vector product of the reset gate's output r_t and the weighted previous information U h_{t−1} is calculated. This determines what to remove from the previous time steps. This is where the whole GRU turns out to be interesting. For the sentiment analysis work, let us assume a paragraph that starts with the sentence "The book talks about science", continues with many more sentences, and finally ends with "I did not quite like the book". We want to estimate the sentiment of this paragraph. As the neural network model approaches the end of the paragraph, it understands through weight adjustment that the sentiment can be determined from the last sentence. As a result, it does not need the information from the other parts of the paragraph. So, it washes out the previous information by setting the r_t value close to 0, i.e., very little information is passed through from the previous sentences. It can then focus on the last sentence to predict the overall sentiment. After that, the result from x_t and the vector multiplication are summed together, and the non-linear activation function tanh is applied, as shown in Figure 2.7.

Figure 2.7: Diagram showing GRU tanh function [32]

As can be seen from this diagram, the reset gate output r_t and the previous information h_{t−1} are vector-multiplied and summed up with the input x_t. Finally, the tanh activation function is applied. That yields the new information for time stamp t as h′_t.

2.2.9.4 Final Memory at Current Time Stamp

At this final step, the GRU calculates h_t for the current time stamp. This is the information that has been accumulated so far, and it will be passed down the line to the next unit, continuing until the output layer. The update gate is used for this purpose. It combines the necessary information from the current phase h′_t and the previous phase h_{t−1} with the following formula:

h_t = z_t ⊙ h_{t−1} + (1 − z_t) ⊙ h′_t

First, z_t from the update gate is multiplied element-wise with the previous phase information h_{t−1}. Second, 1 − z_t is multiplied element-wise with the current phase information h′_t. These products are then summed. With this technique, the situation where a sentiment sits at the front of a sentence and this information needs to be carried along to the last part of a long text can be handled: the unit can still remember the information received at the beginning. However, when the sentiment changes over the course of the document, for example positive at the beginning and negative at the end, it might generate an incorrect result for a two-class sentiment analysis. To mitigate this, more sentiment classes can be added, or perhaps weights can be applied.
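Putting the update gate, reset gate, current memory content and final memory together, a single GRU step can be sketched in numpy. This follows the equations of this section (where z_t scales the previous state); the weights and dimensions are illustrative, not trained values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    """One GRU step following this section's equations.

    z_t, r_t lie in (0, 1); h_t = z_t*h_prev + (1 - z_t)*h_cand, so a
    z_t close to 1 keeps the past and close to 0 takes the new content.
    """
    z_t = sigmoid(p["Wz"] @ x_t + p["Uz"] @ h_prev)           # update gate
    r_t = sigmoid(p["Wr"] @ x_t + p["Ur"] @ h_prev)           # reset gate
    h_cand = np.tanh(p["W"] @ x_t + r_t * (p["U"] @ h_prev))  # current memory
    return z_t * h_prev + (1.0 - z_t) * h_cand                # final memory

rng = np.random.default_rng(0)
n = 8
params = {k: rng.standard_normal((n, n)) * 0.1
          for k in ("Wz", "Uz", "Wr", "Ur", "W", "U")}

# Run a short sequence of 5 input vectors through the unit.
h = np.zeros(n)
for _ in range(5):
    h = gru_step(rng.standard_normal(n), h, params)
print(h.shape)
```

Because the candidate memory goes through tanh and the gates interpolate between old and new content, the hidden state stays bounded while still carrying information across steps.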
When the most relevant information is at the beginning, the model can learn to set the vector z_t close to one and keep most of the past information. As z_t becomes close to one, 1 − z_t approaches zero, so little of the current content, which is irrelevant for the prediction anyway, is added. The following diagram illustrates this:

Figure 2.8: Diagram showing GRU function [32]

2.2.10 Evaluation of Sentiment Analysis Model

Like any other machine learning model, the model in the sentiment analysis task can be evaluated with the training and testing accuracy. Comparing the training and testing loss indicates how well the model works: the lower the loss, the better the model performs. In practice, the loss is reduced at each iteration, and the loss reduction rate is observed. How many epochs should be run can also be determined by observing the gradual reduction in the training and testing loss. Loss is not a percentage like the accuracy; rather, it aggregates the errors made on each example during training and testing. Accuracy, on the other hand, is based on the true positive and true negative counts. For this kind of machine learning model, accuracy is also compared to evaluate the model performance. The dataset used for sentiment analysis is split into training and testing sets, and finally the training and testing loss and accuracy are compared.

Chapter 3

Sentiment Analysis

This chapter presents a sentiment analysis of the Bangla language. A model is proposed that achieves better accuracy than the existing sentiment analysis work on Bangla. Data collection and preprocessing are discussed along with the methodology proposed in this thesis. The experimental setup and the results are discussed in the subsequent sections, and the results are compared with the existing systems. The LSTM algorithm is explained. Finally, the results are discussed with graphical representations.

3.1 Data Collection and Preprocessing

Data have been collected from Facebook using the Facebook Graph API [34]. The data are mostly user comments and reviews from pages, specifically from the Facebook page of a mobile operator. 34,271 comments were collected from Facebook. All the unnecessary data tuples except those containing Bangla were removed. Then the data were tagged manually into the Positive, Negative and Neutral classes. Figure 3.1 shows what the data look like after cleaning all the noise.

From Figure 3.1 we can see that the dataset contains two columns. The "Text" column contains the texts and the "Class" column contains the tags for the corresponding texts. When the model is trained, it can relate the sentences to their class, which eventually leads to the prediction on a test set.

Figure 3.1: The dataset for the sentiment analysis work

In Table 3.1 the data (sentence) counts are shown. As we can see, there is no significant imbalance in the dataset.
However, there are more instances of negative and neutral comments than positive comments.

Table 3.1: Data Statistics

Class     Count
Positive   8271
Negative  14000
Neutral   12000
Total     34271

3.2 Character Encoding

Before the dataset can be used to train the model, it needs to be represented in a vector space. There are different methods to represent data in a vector space, such as TF-IDF, Bag of Words, and distributed representations of words, for example Word2vec, GloVe etc. However, these methods have some drawbacks when it comes to word-level representation. The main problem with these models is that they rely entirely on the words of the corpus. Any word that has not been observed during training has no learned representation, so it cannot contribute on the test set and the results might not be complete. Consequently, the model does not know those words and cannot predict their meaning in the overall scenario. Most research uses words as the unit of a sentence. However, in [11] Xiang Zhang et al. performed an empirical study on character-level text classification using a convolutional network with an English dataset and found that this method works well on real-life data or on data generated by users. In this research the character-level as well as the word-level representation is explored, and two models for these two unit encodings are proposed. A comparison of these models in terms of their accuracy for Bangla is performed, and it is explained why the character-level model works better than the word-level model.

The accuracy of these models, however, depends on many other factors including the choice of alphabet, the size of the dataset, hyperparameter tuning etc. In this work, 67 characters of Bangla are used, including the space and some special characters. There are 49 letters in the Bangla alphabet, but the special characters are considered too. All 67 characters are illustrated in the following figure:

Figure 3.2: Characters

No numeric characters of Bangla are used. The Bangla numeric characters "1", "2" and "3" were used instead of three other Bangla letters because Python could not recognize them. After that, the characters are encoded with a unique ID from the list of characters. The system recognizes each character by its own ID, and this further benefits the representation of the sentences. The encoding is illustrated in Figure 3.3; the order in the figure is just a random order for demonstration purposes.

Figure 3.3: Character encoding

The length of a data instance is l = 1024, and it was observed from the dataset that most of the sentences are within this limit. Sentences longer than 1024 characters were truncated to 1024, and sentences shorter than 1024 were padded with 0s. Any characters other than the 67 selected ones are removed using a regular expression in the Python implementation.

3.3 Methodology

Deep learning methods have been applied successfully to natural language processing problems and have achieved state-of-the-art results in this field. The Recurrent Neural Network (RNN) is a kind of neural network which is used for processing sequential data [12]. Later on, researchers found some mathematical problems in modelling long sequences using an RNN [13][14]. A clever idea was proposed by Hochreiter and Schmidhuber to solve this problem: create a path and let the gradient flow over the time steps dynamically [13].
This path can be imagined as a channel where different sources of water merge together and we want to control the flow. It is known as Long Short Term Memory (LSTM), which was discussed in the background section in chapter 2. It is a popular and successful technique for handling the long-term dependency problem, which is the domain of many NLP tasks. One of the variants of the LSTM is the Gated Recurrent Unit (GRU) proposed by Cho et al. [33]. The difference between the LSTM and the GRU is that the GRU merges the forget and input gates into a single update gate, which means it can control the flow of information without the use of a memory unit, and it combines the cell state and the hidden state. The rest is the same as the LSTM. In [15] Junyoung Chung et al. conducted an empirical study on three types of RNN and found that the Gated Recurrent Unit works better than the other two. The GRU is also computationally more efficient than the LSTM. The following equations explain how the GRU works from the mathematical point of view:

z_t = σ(W_z · [h_{t−1}, x_t])   (3.1)
r_t = σ(W_r · [h_{t−1}, x_t])   (3.2)
h′_t = tanh(W · [r_t ∗ h_{t−1}, x_t])   (3.3)
h_t = (1 − z_t) ∗ h_{t−1} + z_t ∗ h′_t   (3.4)

These are the equations that demonstrate how the hidden state h_t is calculated in the GRU. It has two gates: the update gate z and the reset gate r. Equations (3.1) and (3.2) show how these two are calculated. The reset gate determines how to combine the new input with the previous memory, and the update gate determines how much of the previous memory to keep around. Finally, the hidden state h_t is calculated as in equation (3.4). The classification of the sentiment, however, is a step-by-step process. For example, to classify any sentence from the dataset, it first goes through the preprocessing step. Here, all the characters except the ones defined in Figure 3.2 are filtered out of the sentence, and the remaining sentence is represented in a vector space. Every character is given a numeric ID, and the sequence is then zero-padded to 1024 characters (any sentence with more than 1024 characters is truncated to 1024). This vector is fed through the model, and eventually the model maps the input sentence to a sentiment class. In each hidden layer of the model, ever more meaningful features are extracted from the previous layer, as in any usual neural network model, and the output layer calculates the Softmax probability of each class. The class with the highest probability is the predicted result.

3.4 Proposed Model

In this research two models are developed: one with the word-level representation, which is the usual case for most RNN work, and a character-level model, which is the main focus of this work. Furthermore, a comparison of both models in terms of their accuracy is performed. In this section both of these models and their architectures are discussed. The word-level model is denoted as the baseline model and the second model as the character model.

3.4.1 Baseline Model

The baseline model is basically the word model. In this model each word is considered a unit of the sentence.
The architecture of the model is described with a graphical illustration. The baseline model consists of one embedding layer with 80 units and three hidden layers. Of these three hidden layers, two are LSTM layers with 128 units each. The third is a vanilla layer with 1024 units, which is equal to the length of each sentence in the corpus. A dropout layer [16] with a probability of 0.3 is also employed between the last hidden layer and the output layer. Combining a vanilla layer with the LSTM layers worked better for predicting sequences in Bangla. Essentially, the vanilla layer is convenient for predicting the successor or generating smaller sequences that are eventually fed into the output layer. A vanilla layer of 1024 units is employed, which can individually identify each of the letters. However, for the base model, the embedding layer is not sized according to the number of letters as in the character-level model, which will be described in the next subsection. The reason for using a vanilla layer in combination with the LSTM layers is that the vanilla layer itself is
<s>not able to predict longersequences. Let assume an example sentence as the following:Sunny works at GoogleThis is a very simple sentence with only one subject. So it can be tagged as Sunny: Subject and Google : Organization. If the model sees any such type of sequenceit can understand the semantic meaning and the dependency at the lower level.Sunny works at .... : this can be predicted that the blank space may belong to anorganization.WordEncodingFigure 3.4: Word level model architectureHowever, for long and complicated sentence structures the vanilla layer can notpredict them and it does not seem to be a good idea for the corpus to use thevanilla layer at the beginning layers since LSTM can do better in understanding thecomplex dependencies. So the vanilla layer proved to be working better right beforethe output layer when the smaller sequences are already generated by previousembedding and LSTM layers. The architecture of the base model is illustrated inthe Figure 3.43.4.2 Character Level ModelIn this section the character level model is described. The proposed model consistsof embedding layer with 67 units. We have 67 characters including the specialcharacters. In this model the optimized number of hidden layers are three. Outof these 3 layers, two are with 128 GRU units and the last embedding layer is avanilla layer same as before with 1024 units stacked up serially and the last layeris the output layer. A dropout layer of 0.3 between the output layer and the lasthidden layer is employed. The model architecture is illustrated as follows:Figure 3.5: Character level model architecture3.5 ExperimentationThe model was run in 6 epochs with a batch size of 512 and Adam [17] was used asour optimizer. Categorical cross entropy was used as a loss function. The learningrate was set to 0.01 to train the model. 
Many different hyperparameter settings (learning rate, number of layers, layer size, optimizer) were tried, and this configuration gave the optimal result. The hyperparameters were set by trial and error to find their optimal values. The embedding size was kept at 67, as the dataset has 67 characters, and the dropout was set to 0.3 between the output layer and the dense layer in both models. Early stopping was used to avoid overfitting. All the experiments were done in the python library named keras [35], which is a high-level neural networks API. The dataset was split into training and testing sets with an 80:20 ratio.

3.6 Results and Discussion

The result achieved by the character level model is better than that of the word level model: 80% accuracy for the character level model versus 77% accuracy for our baseline model with word level representation. Recently, sentiment analysis in Bangla achieved a highest of 78% accuracy in [7] using LSTM with two-class classification. Figure 3.6 shows the training and testing loss of the model. Here we can see that after a certain epoch the training loss started decreasing faster than the testing loss. The testing loss, on the other hand, decreases at a slower rate compared to the training loss. The training was stopped at epoch 6, resulting in saving the
<s>model from overfitting. Figure 3.7 shows the training andtesting accuracy of the character level model and Figure 3.8 shows the comparisonbetween the two models. The most important observation from the experimentsis that the character-level RNN could work for text classification without the needfor word semantic meanings. It may also extract the information even if the wordis not correct as it goes through each of the characters individually.0 1 2 3 4 5Epoch0.50.60.70.80.91.0training losstesting lossFigure 3.6: Training and testing lossHowever, to observe the performance of this model across different datasets, moreresearch is needed. Nevertheless, the result depends on various factors includingthe size of the dataset, alphabet choice, data quality etc. But the dataset in thisthesis is focused on a specific telecommunication campaign domain. So this modelcan be helpful on some specific applications. The accuracy is calculated as a ratioof correctly classified data and a total number of data from the test set. Theequation is as follows:Accuracy =Tp + Tn(Tp + Tn + Fp + Fn)0 1 2 3 4 5Epoch0.500.550.600.650.700.750.80training accuracytesting accuracyFigure 3.7: Training and testing accuracyModelCharacter Model Word Model100meta-chart.comFigure 3.8: Comparison of the two modelsThe accuracy comes from the confusion matrix. The confusion matrix is basicallya table that allows an insight to explore the exact numbers of true and falsepredictions from the model. It is widely used in machine learning and variousartificial intelligence tasks to observe the performances of different models. Fromthe confusion matrix table a ratio of ground truth and prediction is obtained whichis eventually used as a percentage for performance. The predictions are dividedinto actual and predicted values. The actual values mean what is the reality andthe predicted values gives the result from the model. The true value percentagecan be calculated from the matrix. 
Let us assume some numbers as an example for better understanding of the confusion matrix in the table below:

Table 3.2: Example confusion matrix

              | predicted false | predicted true
actual false  |       20        |       10
actual true   |       20        |      150

As we can see from the table above, we have 20 examples that are actual false and predicted false. This means that, out of 200 examples in the dataset, 20 examples were false in reality and predicted as false by the model too. These are correct predictions, known as True-Negatives and denoted Tn. In the third row and second column there are 20 more examples which were true in reality but predicted as false by the model. These are wrong predictions, known as False-Negatives and denoted Fn. In the second row and third column are the 10 examples that were false in reality but predicted as true; again, these are wrong predictions, known as False-Positives and denoted Fp. Finally, the last 150 examples were predicted as true and were actually true, so these are known as True-Positives and denoted Tp. For the accuracy formula, we find the ratio of the number of correctly predicted examples to all predictions, as mentioned in the equation. To conclude, this chapter offers research on character-level RNNs for sentiment analysis in Bangla. The model was compared with a word-level deep learning baseline.
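The worked numbers above can be checked with a few lines of code:

```python
# Accuracy from the example confusion matrix in Table 3.2.
tn, fp = 20, 10   # actual false: predicted false / predicted true
fn, tp = 20, 150  # actual true:  predicted false / predicted true

# Accuracy = (Tp + Tn) / (Tp + Tn + Fp + Fn)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 0.85
```

So, with these example counts, 170 of the 200 predictions are correct, giving 85% accuracy.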
<s>Com-parison of the two models across different data is one future goal of this work thatwill make it useable in the industries to extract sentiments from the social mediareviews and comments.Chapter 4Topic ModellingIn this chapter the second contribution of the thesis will be described which istopic modelling. The problem statement, the corpus, experimental results andanalysis will be reported. The basic contribution from this chapter is to developa Bangla corpus from the most famous newspaper called “The Daily ProthomAlo”, execute the preprocessing tasks and finally propose a topic model to classifynews. Topic Modelling or any NLP work in general is a lot about preprocessingdata. So a multi-phase preprocessing is done before designing the model. Topicsare extracted and documents are classified according to their topics. Similaritymeasure is proposed along with the topic extraction. An evaluation method isalso proposed to critically analyze the results. A comparison of the performanceof different models in terms of coherence is done to see which variation of LDAworks better on Bangla.4.1 The CorpusCollecting the data for the Bangla language has always been a challenge due toscarcity of resources and unavailability of publicly available corpora. Althoughvarious research has been going on with Bangla regarding NLP, none of thosedatasets are made available for the public. So, it took a while for this thesis justto understand the data and make a structured format on which topic modellingand other experiments can be performed. The dataset used for this thesis is a newscorpus. It is collected from one of the most used and popular newspaper called“The Daily Prothom Alo”. A crawler is developed with python library calledBeautiful Soup. The data has 7134 news articles from many different categories.All the news from January 1, 2018 to March 31st, 2018 were scraped. “The dailyProthom Alo” has an online archive section. 
The archive section was crawled for each day's paper over the mentioned 3 months. The news data were collected in a CSV file. The news data look as follows:

Figure 4.1: The news corpus: CSV file

In the dataset each row represents an instance from the newspaper. The features are news, title, reporter, date, and category. The methods applied to this dataset are all unsupervised learning. However, we still collect the rest of the features so that the dataset can be extended in the future into a publicly available dataset for further NLP research. The corpus is organized chronologically from the oldest news to the most recent. This is done in order to experiment with topic trends over this period and, as future work, see how they evolve in a Bangladeshi newspaper. However, the news articles are not ordered in any sequence in terms of category or anything else. The main objective of this chapter is to apply topic modelling techniques to find the topics and then classify the news by the topics they belong to. For this, we do not need any ordering of the news. All the news articles were crawled serially on each day. News of different categories was crawled and saved into the CSV file. While saving
<s>into the CSV file, UTF-8 encoding was used. Without UTF-8 en-coding python is not able to understand the characters for further processing. Weremoved all the hyperlinks from the text to make it more readable to the program.In the prepossessing section it will be discussed in detail.Bangla is a popular peotic and a rich language in the subcontinent. It is thenative language of Bangladesh and also widely used in the west. The languageconsists of 49 letters all together. Bangla language can be divided in Vowel, Vowelmarks, Consonants, Consonant conjuncts, diacritical and other symbols, digits,and punctuation marks. The list of vowels and consonants are given as follows:অ আ ই ঈ উ ঊ ঋ এ ঐ ও ঔ Figure 4.2: Bagla Independent VowelsThe main 11 vowels in Bangla are known as independent vowels. They can be usedas a standalone characters for many words. However, these vowels are sometimesconjugated with the dependent vowels to make specific sounds. The dependentvowels are used at the left, right or the base of the independent vowels. The inde-pendent vowels can also be wrapped by a dependent vowel. Sounds like AA, EE,OO, OI, OU etc. are made with combining the dependent and the independentvowels.◌া ি◌◌ ◌ী ◌� ◌� ◌� �◌◌ �◌◌ �◌◌া �◌◌ৗ Figure 4.3: Bagla Dependent Vowelsক খ গ ঘ ঙ চ ছ জ ঝ ঞ ট ঠ ড ঢ ণ ত থ দ ধ ন প ফ বভ ম য র ল শ ষ স হ ড় ঢ় য় ৎ Figure 4.4: Bagla ConsonantsBangla words are formed in combination of vowels, consonants and sometimesspecial characters. In Figure 4.5 some Bangla words are illustrated which willhave their English romanized pronunciation and translation in the Table 4.1অত এব �যখােন অথবা �কউ স�ক Figure 4.5: Bagla words4.2 The CrawlerA crawler is developed for the purpose of data collection due to the unavailabilityof any structured corpus on Bangla news. A python library called Beautiful Soup isused for this crawler. Open request of the url that needs to be crawled is deliveredinto the python code first. 
So it can go to that page and look for the html divs or href links it needs to grab the news from. The crawler then finds all the divs the news texts belong to. Inside each div there is a news title, which is the headline. The title text is collected first. The crawler then follows the link to get the detailed news, where it has some p tags to fetch the text from. At the same time we fetch the date and time and the name of the reporter from that link, and the category from some other p tags around that link. After one iteration a single instance is collected, and then the iteration continues. Python lists are maintained to append each of the instances. Once all the instances are collected, they are inserted as rows into a CSV file. For easier debugging, each run collected 10 days of news.

4.3 Preprocessing and Cleaning

As the saying goes, data preprocessing and cleaning is a major part of the implementation: “Garbage
<s>in, Garbage out”. So is true for this dataset. A detailedprocess for the data cleaning will be discussed in this section. The dataset consistof over 7,000 news collected over a 3 months of time. The whole corpus is en-coded in UTF-8. Tokenization, Stop words removal, Vectorization, Bigram modeldevelopment and removing over and under frequent words are all part of the datapreprocessing. The steps are discussed in following subsections.4.3.1 TokenizationFrom the news article collection each of the words is tokenized and put into anarray. The sentences are split to make each token. This process is known asTokenization. A formal definition of tokenization is as follows:“Given a character sequence and a defined document unit, tokenization is thetask of chopping it up into pieces, called tokens and at the same time throwingaway certain characters, such as punctuation” [36]. An example of bangla wordtokenization for the sentence : Aami Banglay Gaan Gai is ami, Banglay,gaan, gaai. Each of the word in the sentence is tokenized and from the imple-mentation point of view, these tokenized words are then appended into a pythonlist for further processing. The main goal of this tokenization is to access eachword and represent them in such a way that the computer can map them withnumeric values. This is where the Bag of Words model comes into play which willbe explained in the next section. Once the words are tokenized and inserted intoa list it is ready for the next step which plays a vital role in data preprocessing:Stop words removal.4.3.2 Stop WordsStop words are the words that do not play any role in finding the semantic mean-ing of the corpus. These words are basically connecting words like prepositions,conjunctions etc. So, before feeding the data into the model, removing the stopwords is necessary. As with English, Bangla language has a lot of stop wordstoo. 
These are connecting words, just like prepositions and conjunctions. However, since Bangla was brought into the NLP world quite recently, there is still no fully established list of stop words, so we developed an enhanced stop list for our program. A list of 438 stop words was used. Some of the stop words from the list, with their English translations, are as follows:

Table 4.1: Stop words

Bangla stop word in English letters | English meaning
Otoab    | Therefore
Jekhane  | Whereas
Othoba   | Or
Ki       | What
Ke       | Who
Ti       | The
Keu      | Someone
Shothik  | Right

From the above table it can be seen that these words contribute nothing to classifying a document or providing semantic meaning. All of these words are removed from the news corpus. Words occurring in over 50 percent of the documents, or in fewer than 20 documents across the whole corpus, are also removed.

4.3.3 Bag of Words Model

Once the corpus is tokenized and the stop words are removed, it is ready for the Bag of Words model, through which the whole corpus is vectorized. Vectorization of the corpus is an important task because machine learning algorithms need the words represented as numbers, and the Bag of Words model is used for this. In the construction
<s>process of the Bag of Words model there are few sub processes.The first one is to make a dictionary. From NLP perception, a dictionary consistsof the tokenized words in a list with only unique words. Then each documentmaps that dictionary to represent its vectorized version which only has a number.The numbers only represent the frequency of the words in that document. Anexample is as follows:Dictionary = [coffee, milk, sugar, spoon]Doc = [I love coffee with milk]Vector = [0, 0, 1, 0, 1]In this example, we can see that the bag of words model develops the Dictionary.As the name suggests, it is a model that holds the tokenized unique words in alist. When a document needs to be vectorized it is then mapped to the dictionary.The algorithm goes through each words and maps to the dictionary to identify ifit is present or not. If the word is not found in the dictionary then it is given avalue 0. However, if the word is present in the dictionary then the value is equalto the frequency of the word in the document.4.3.4 BigramFor the task of topic modelling, Bigram creation is an important part. A bigramis a sequence of two adjacent tokens in the corpus occurring frequently [37]. Soa probability is calculated for these words to occur one after another. If theyhave a threshold value these word pairs are combined together and put into a newtoken in the dictionary. Basically bigrams are n-grams with n = 2. A conditionalprobability is calculated for bigrams. Probability of Wn given Wn−1is given withthe following equation:P (Wn|Wn−1) =P (Wn,Wn−1)P (Wn−1)(4.1)An example of a Bigram for English text is “Air”, Traffic. If we have a news dataabout airplanes or air traffic, it is highly likely that these two tokens will occurtogether may be at all times. So, a new token can be inserted into the existinglist as air traffic. Since topic modelling extracts topics and each topic consists ofrelevant words, it is important that these two words combined together fall into asingle topic. 
In our proposed model, we used the bigram model because two adjacent words often occur together in the corpus. We did not need trigrams, since Bengali sentence structure rarely has three consecutive words forming a single phrase.

4.3.5 Removing Rare and Common Words

After the bigrams are made and the dictionary is enhanced with them, we further remove some rare and common words to make the corpus a bit more meaningful to the proposed model. Words occurring in fewer than 20 documents, or in over 50 percent of the documents, are removed. The 50 percent threshold makes sense because words occurring in more than 50 percent of the documents are effectively extra stop words that are not listed in the stop word list.

4.4 Proposed Model

Now that the news corpus is ready for training the topic modelling techniques, we formalize the proposed model for training on the Bangla news corpus to get the best out of it in terms of topic modelling. In this subsection we
<s>describe the proposed modelin a step by step process. Our main goal from this research is to find a way toextract the topics from the corpus. In this chapter a methodology is also proposedto find out the right topic a news belong to. This way each news can be classifiedin their right category. The proposed model is illustrated in the following diagram:CorpusPreprocessingBag of WordsApply LDAConversion toDictionaryBigramOptimize No.Of TopicsFigure 4.6: Proposed model for topic extractionThis is the basic structure on how the model works. Once the dictionary is readyand the preprocessing is done, the LDA algorithm is applied. The training is per-formed on 7134 news articles. It is an unsupervised learning. The dictionary needsto be assigned to the id2word parameter in the LDA algorithm. The dictionaryis already set up in the preprocessing section. This whole dictionary goes into themodel and it extracts a number of topics. However, LDA does not know how manytopics it has to extract. A coherence based method is proposed to understand theoptimal number of topics. From that experiment, the right number of topics areassigned as a hyperparameter during training the model. One problem with LDAis that it can get overfitted if too many topics are extracted. So, finding the rightnumber of topic is important.Before the model is trained and the LDA algorithm is run, a coherence-basedexperiment is performed. For this experiment, the number of topics are set to 200and the Coherence VS Topic graph is monitored. We set the value to check thegradual coherence movement across different topics and found that it gets to thepeak at around topic 47. So, we took that number and fed it into the algorithm.The model does not underfit or overfit. Once we get the model trained withour corpus then we want to evaluate the model with some experiments. We haveperformed a cosine similarity check between different news articles. Some news aresimilar and some of them are different. 
It is expected to see a higher similarity score between similar news articles and a lower one between news about different subjects, and we obtained such scores from the trained LDA model. However, cosine similarity can also be obtained from a Doc2Vec model. That is why we developed the Doc2Vec model: to compare the cosine similarity scores of LDA and Doc2Vec and gain insight into how both of these models behave in terms of similarity. A comparison with other variations of LDA has also been performed.

4.5 Algorithm

In this section the LDA algorithm is discussed. We focus on how it extracts topics from the corpus after the algorithm has been trained. For our model we have used the Latent Dirichlet Allocation (LDA) algorithm proposed by Dr. Blei in 2003. First, we discuss some terminology that will be used frequently in the LDA discussion: what a Topic, a Term and a Document mean from the LDA point of view.

Topics: A topic is a collection of words extracted from the corpus the LDA algorithm is applied to. The words belonging to a topic are coherent, and collectively they mean a single category.
<s>An example of a topic is as follows:Topic 1 : Trump, U.S.A, Immigration, Immigrants, H1BTopic 2: Traffic, Accident, Car, Death, DebrisAs can be seen from these two topic examples, they both contain some wordsor better known as terms from the corpus. However,the words are correlated andcollectively they can be interpreted by a human as a single topic. So, we can tag thetopic 1 as “Immigration in the US” and topic 2 as “Traffic Accidents”. BasicallyLDA collects the topics from the news corpus and then we can use these topicsin various ways to perform different NLP tasks such as Document classification,Sentiment Analysis, Understanding the meaning of any big corpus without goingthrough it manually etc.Terms: From the LDA perspective a Term is a word from the corpus. All theunique words across the complete corpus is known as terms.Documents: A document is a collection of sentences that discuss about a singletopic in a corpus. In most cases a corpus is a collection of different document. Inthis thesis we have 7,134 documents in our corpus talking about different topics.LDA is a bottom up approach. It assumes that each document is a mixture oftopics [22]. Each topic then generates words based on their probabilities. Theprobabilities sum up to one in each topic. LDA backtracks and tries to assumewhich topic would generate this corpus in the first place [22]. For this, LDAfirst generates a document-term matrix which has all the documents and all thewords from the corpus. The values in this matrix represent the frequency of thatterm in the document. Assume we have n documents D1, D2, ..., Dn and m termsK1, K2, ..., Km. So the matrix will look as follows:Table 4.2: Document-Term matrixK1 K2 KmD1 1 0 2D2 3 1 1Dn 0 0 4The matrix can map each term and document with a frequency of words. Gener-ally, this is a sparse matrix with lots of 0s since a single document can have onlyspecific terms and not others. Later, LDA generates another matrix which mapsa topic to a term. 
Assume we have p topics T1, T2, ..., Tp and, as before, m terms K1, K2, ..., Km. The matrix is reported in Table 4.3.

Table 4.3: Term-Topic matrix

     K1  K2  ...  Km
T1    1   0  ...   0
T2    0   1  ...   0
Tp    0   0  ...   1

In this matrix each term is either assigned to a topic or not, so it is a binary matrix in which each topic is assigned to a term. However, the topics are not labelled in LDA. A probability is calculated for each topic, and only the top terms of a topic are considered to belong to it. What LDA does is go through each term K and assign a topic T with a probability P. This probability comes from two other probabilities, P1 and P2, as follows:

P1 = P(Topic | Document)
P2 = P(Term | Topic)

Here, probability P1 is the proportion of terms in document D that are currently assigned to topic T. Probability P2 is the proportion of assignments to topic T, over all the documents D, that come from this term K. These two
<s>probabilities are then multiplied together and that is the probability of thisassignment of the topic to the term. At each run all the terms are then assigneda topic. In the next iteration the probabilities are updated with better topic-termassignments that is more coherent. At some point the model converges with noimprovement in the probabilities. This is when the algorithm stops and all theterms are assigned to their right topics.LDA algorithm has some hyperparameters. The two matrices can be controlledwith Alpha and Beta hyperparameters. Alpha and Beta represent Document-Topic and Topic-Term compactness respectively. The higher the value of Alphathe Documents are composed of more topics and the higher the value of Beta eachtopic is composed of more words. These need to be adjusted according to the goalof the model.Number of terms under each topic is another hyperparameter. For the purpose ofthis thesis it is set to 10. It makes more sense to see 10 words under each topic forthe corpus of our size. LDA is one of the most popular topic modelling algorithmsthat gives state of the art results for English. The proposed model works good onthe Bangla too. The main core parts of the LDA work as Bayesian Network.4.6 ExperimentationThe first experiment that we perform is to understand the number of topics toinfer from the trained model. LDA itself does not understand the optimal numberof topics. So we performed an experiment to understand the optimal number oftopics. It is important to know how many topics we should infer from a trainedLDA corpus. It varies on the dataset and the main goal of the research. Ourpurpose is to infer topics from an online newspaper with about 7,134 news articleinstances. When too many topics are inferred from the LDA model it may getoverfitted which is not useful. On the other hand extracting too few topics does notgive meaningful result. So a coherence based value is considered for understandingthe right number of topics. 
We experimented with the model over 200 topics, tracking the aggregated coherence value for each number of topics, as follows:

Figure 4.7: Coherence based number of topics

As can be seen from this plot, as the number of topics increases, the coherence value rises rapidly at the beginning and reaches its peak when the number of topics is around 47. With approximately 47 topics the model performs best according to its coherence value; after that the coherence value falls off gradually. We ran this experiment up to 200 topics, which is more than the number of topics a newspaper is likely to have. For this experiment the model was run 200 times, and each time the coherence value was checked to see how it changes with the number of topics. At the peak the coherence value is around 0.45, so we took 47 as the right hyperparameter. In the coming section we discuss how the coherence value makes sense for these topics. An experiment with fewer topics
<s>is also performed. 10 and 20 topics are experi-mented at the beginning level before experimenting with 200 topics.1 2 3 4 5 6 7 8 9Number of topics0.100.150.200.250.30Figure 4.8: Coherence based number of topics (t=10)From these experiments it is seen that none of these reach the coherence valuearound 0.45 which was the highest when done with 47 topics. Although in Figure4.5 we can see that the coherence value is going up at around 0.375 but it doesnot still reach the maximum. So, 47 topics is the optimal number for our purpose.In the subsequent section we will see the extracted topics and their meanings.4.6.1 Topic ExtractionIn this section we will explore the extracted topics and their meanings. So, wehave 47 topics extracted from this experiment. The topics with their Englishtranslation are as follows:2.5 5.0 7.5 10.0 12.5 15.0 17.5Number of topics0.100.150.200.250.300.35coherenceFigure 4.9: Coherence based number of topics (t=20)Table 4.4: Extracted Topics from the optimized LDATag Extracted TopicLife and Culture (0, 0.024*“Festival” + 0.016*“Section” + 0.015*“Cul-tural” + 0.015*“tradition” + 0.014*“Eid” + 0.014*“End”+ 0.013*“Eyes” + 0.013*“Life” + 0.011*“House” +0.009*“Year”)Film Festival (1, 0.037*“Bangladesh” + 0.029*“India” + 0.018*“Fes-tive” + 0.017*“In hand” + 0.016*“Film” + 0.015*“Ben-gali” + 0.014*“Sound” + 0.013*“Movie” + 0.011*“Deny” +0.011*“Step”)BangladeshElection(2, 0.023*“Development” + 0.017*“Past year” +0.016*“Bangladesh” + 0.015*“County” + 0.015*“Thisyear” + 0.014*“Election” + 0.013*“Commitment” +0.012*“Environment” + 0.012*“South coast” + 0.012*“Pro-tect”)Media (3, 0.051*“Song” + 0.033*“Seminar” + 0.030*“News” +0.027*“Press briefing” + 0.026*“Local” + 0.024*“Program”+ 0.024*“Songs” + 0.023*“Name” + 0.023*“Political” +0.022*“Importance”)Continuation of Table 4.4Tag Extracted TopicElection (4, 0.045*“Vote” + 0.032*“Schedule” + 0.032*“Commis-sion’s” + 0.028*“Election” + 0.027*“Election commissioner”+ 0.024*“Candidate” + 0.021*“Announce” + 
0.018*“Section” + 0.015*“Postpone” + 0.015*“Head”)

Election: (5, 0.074*“Election” + 0.027*“Election’s” + 0.025*“Commission” + 0.023*“Legal” + 0.023*“Polling” + 0.018*“Government” + 0.017*“Law-order” + 0.014*“Election commission” + 0.013*“By-law” + 0.013*“Ministry”)

Public Exams: (6, 0.035*“Power” + 0.029*“Student” + 0.023*“Exam” + 0.021*“Management” + 0.021*“Right” + 0.021*“Answer” + 0.020*“Question” + 0.019*“Delivery” + 0.018*“Ethical” + 0.017*“Politics”)

Misc: (7, 0.045*“Police” + 0.025*“heat” + 0.024*“facebook” + 0.022*“Immigration” + 0.020*“Less” + 0.019*“Achieve” + 0.018*“Computer” + 0.017*“Journalist” + 0.017*“Execute” + 0.015*“Cold”)

USA immigration: (8, 0.052*“Trump” + 0.044*“President” + 0.025*“USA” + 0.023*“Against” + 0.020*“Administration” + 0.019*“Immigration” + 0.019*“Complaint” + 0.017*“Foreign affairs” + 0.016*“Order” + 0.016*“Direct”)

Law: (9, 0.048*“Head” + 0.031*“chief justice” + 0.021*“Section” + 0.017*“Supreme” + 0.015*“Worker” + 0.014*“Court” + 0.012*“Justice’s” + 0.012*“Prosecution” + 0.011*“Law” + 0.011*“chief justice”)

Cinema or Movie: (10, 0.020*“In movie” + 0.017*“Acting” + 0.016*“Theme” + 0.016*“Death” + 0.015*“Audience” + 0.013*“Excitement” + 0.013*“Profit” + 0.012*“Gradual” + 0.012*“Super hit” + 0.012*“In between”)

Finance or Politics: (11, 0.045*“Money” + 0.039*“Bank” + 0.027*“League” + 0.026*“Awami league” + 0.023*“Awami” + 0.020*“Chairman” + 0.019*“Sonali bank” + 0.018*“Parliament” + 0.014*“Financial” + 0.013*“Institution”)

Police or Crime: (12, 0.059*“Notice” + 0.024*“Yourself” + 0.023*“Police” + 0.018*“Police’s” + 0.017*“Hassan” + 0.017*“Arrest” + 0.016*“Statement” + 0.014*“Record” + 0.013*“person” + 0.013*“Presence”)

Misc: (13, 0.050*“Abdul” + 0.043*“description” + 0.030*“Life” + 0.026*“Final” + 0.022*“Bangla” + 0.019*“Form” + 0.017*“Dhaka” + 0.017*“Current” + 0.016*“Request” + 0.015*“Return”)

Movie or Cinema: (14, 0.052*“Movie” + 0.026*“Regarding” + 0.025*“good” + 0.021*“Search” + 0.019*“Interview” + 0.019*“Cinema”
+0.019*“Case” + 0.018*“Behind the scene” + 0.018*“Far” +0.017*“Sensor”)Law or Legal Is-sues(15, 0.041*“Shamim” + 0.024*“Deny” + 0.021*“Weekly”+ 0.020*“Issue” + 0.018*“Supreme” + 0.017*“Court” +0.015*“Legal” + 0.014*“Influence” + 0.014*“Prosecution” +0.014*“Statement”)Transportat-ion (16, 0.022*“Destination” + 0.019*“Road” + 0.018*“Main-tenance” + 0.016*“Start” + 0.015*“Continuation”+ 0.013*“Road cleaning” + 0.013*“Highway” +0.013*“Big project” + 0.012*“Look after” + 0.012*“Re-peat”)Accident (17, 0.028*“Land” + 0.025*“Run” + 0.019*“leave”+</s>
0.018*“Ahead” + 0.017*“traffic” + 0.017*“Air” + 0.014*“Crashed” + 0.014*“landing” + 0.012*“control” + 0.012*“death”)
City or government: (18, 0.040*“Mayor” + 0.038*“personal” + 0.026*“Dhaka” + 0.018*“league” + 0.016*“team” + 0.016*“Government” + 0.015*“Project” + 0.015*“Editorial” + 0.014*“city mayor” + 0.013*“National”)
Entertainment or Media: (19, 0.085*“of entertainment” + 0.044*“Prothom Alo” + 0.035*“Statement” + 0.027*“Favorite” + 0.026*“Positive” + 0.019*“Movie” + 0.018*“Sentiment” + 0.017*“Support” + 0.016*“Section” + 0.015*“audience”)
Water quality: (20, 0.042*“Water” + 0.021*“Plenty” + 0.020*“University” + 0.019*“Dhaka” + 0.017*“Red color” + 0.014*“Dorm” + 0.014*“Toxic” + 0.013*“Harmful” + 0.013*“Food” + 0.012*“Drink”)
Misc: (21, 0.027*“Postpone” + 0.021*“Flow” + 0.021*“Lift” + 0.020*“To live” + 0.019*“perception” + 0.019*“Hospital” + 0.015*“Drama” + 0.015*“Subject” + 0.014*“to make” + 0.014*“blank”)
Students and Transportation: (22, 0.036*“Students” + 0.024*“Zone” + 0.021*“Partial” + 0.020*“Transportation” + 0.020*“Methodology” + 0.019*“Buses” + 0.019*“Arrange” + 0.018*“Shortage” + 0.016*“Issue” + 0.016*“troublesome”)
Misc: (23, 0.021*“” + 0.020*“Caution” + 0.019*“Competition” + 0.016*“Good” + 0.015*“Question” + 0.015*“Shut off” + 0.014*“Knowledge” + 0.013*“Story” + 0.013*“Opportunity” + 0.012*“Protect”)
Media award: (24, 0.054*“Editor” + 0.045*“Prothom Alo” + 0.033*“Achievement” + 0.033*“Name” + 0.030*“Warmth” + 0.027*“January” + 0.025*“Presence” + 0.025*“Nomination” + 0.023*“Best paper” + 0.021*“award”)
Flyover project: (25, 0.043*“Foreign” + 0.037*“People” + 0.022*“Solution” + 0.018*“flyover” + 0.015*“Roads” + 0.015*“Made” + 0.014*“Invest” + 0.014*“to make” + 0.013*“traffic problem” + 0.013*“appreciation”)
Literature: (26, 0.021*“known” + 0.018*“Poem” + 0.018*“About friend” + 0.017*“Story” + 0.017*“love” + 0.015*“Women” + 0.014*“Often” + 0.014*“Wife” + 0.014*“Couple” + 0.012*“Understanding”)
Bangla Language: (27, 0.034*“regarding” + 0.033*“language” + 0.024*“Bangla” + 0.023*“pain” + 0.023*“read” + 0.021*“publish” + 0.020*“in bangla” + 0.018*“literature” + 0.018*“future” + 0.016*“generation”)
Health and medicine: (28, 0.023*“health” + 0.017*“treatment” + 0.017*“child” + 0.015*“medicine” + 0.014*“bangladesh” + 0.014*“physical” + 0.013*“gain” + 0.013*“young” + 0.013*“national health” + 0.012*“type”)
Wedding and Finance: (29, 0.027*“marriage” + 0.021*“gown” + 0.016*“wedding” + 0.016*“must” + 0.015*“loan” + 0.015*“marriage loan” + 0.014*“privilege” + 0.014*“support” + 0.014*“states” + 0.014*“regarding marriage”)
Administration: (30, 0.032*“meeting” + 0.022*“seminar” + 0.022*“amount” + 0.022*“statement” + 0.021*“mature” + 0.021*“funny” + 0.019*“English” + 0.016*“bring about” + 0.014*“place” + 0.014*“year”)
Lifestyle: (31, 0.017*“opportunity” + 0.014*“family” + 0.013*“middle class” + 0.013*“finance” + 0.012*“organization” + 0.012*“facility” + 0.012*“department” + 0.012*“bank” + 0.012*“micro finance” + 0.011*“repay”)
Movie and Cinema: (32, 0.077*“cinema” + 0.058*“movie” + 0.019*“public” + 0.018*“combine” + 0.015*“overnight” + 0.015*“positive” + 0.015*“act” + 0.013*“girls” + 0.012*“profit share” + 0.012*“year”)
Family and Life: (33, 0.021*“Home” + 0.021*“kids” + 0.019*“winter” + 0.018*“video” + 0.015*“regarding” + 0.014*“mental” + 0.013*“entertainment” + 0.013*“digital” + 0.012*“family” + 0.012*“bonding”)
Movie: (35, 0.026*“Cinema” + 0.020*“Premier” + 0.020*“Director” + 0.019*“Announce” + 0.017*“Ending” + 0.016*“Actress” + 0.015*“Next movie” + 0.015*“Release” + 0.014*“The movie” + 0.012*“Sensor”)
Weather and Season: (36, 0.036*“winter” + 0.025*“season” + 0.023*“statement” + 0.017*“happens” + 0.017*“viral” + 0.016*“most” + 0.015*“team” + 0.015*“explanation” + 0.014*“good” + 0.012*“world”)
Education loan: (37, 0.068*“loan” + 0.043*“child” + 0.031*“repay” + 0.028*“Bank” + 0.016*“Dhaka” + 0.015*“education” + 0.014*“national economy” + 0.014*“study” + 0.013*“money” + 0.012*“big project”)
Information Technology: (38, 0.030*“digitization” + 0.029*“IT” + 0.027*“infrastructure” + 0.024*“authority” + 0.021*“statement” + 0.020*“technology” + 0.019*“upgrade” + 0.018*“software market” + 0.017*“project” + 0.015*“economy”)
Foreign investment: (39, 0.043*“money” + 0.022*“finance” + 0.020*“currency” + 0.019*“approval” + 0.015*“pair” + 0.015*“law” + 0.014*“permission” + 0.013*“home affairs” + 0.012*“america” + 0.012*“lessen”)
Project on IT: (40, 0.022*“proposal” + 0.021*“sometimes” + 0.019*“deny” + 0.018*“mobile” + 0.017*“states” + 0.014*“done” + 0.013*“process” + 0.013*“person” + 0.012*“phone” + 0.011*“invest”)
Bangladesh Education: (41, 0.058*“education” + 0.027*“class” + 0.022*“percentage” + 0.020*“for education” + 0.017*“average” + 0.015*“education ministry” + 0.014*“curriculum” + 0.014*“efficient” + 0.013*“home” + 0.013*“nation wide”)
Investment and Economy: (42, 0.031*“invest” + 0.031*“fiscal year” + 0.023*“small amount” + 0.019*“previous year” + 0.015*“january” + 0.015*“” + 0.015*“million” + 0.014*“money” + 0.014*“year” + 0.012*“people”)
Foreign Affairs: (43, 0.034*“USA” + 0.033*“unstable” + 0.025*“declining economy” + 0.022*“rise” + 0.020*“law” + 0.020*“reply trump” +
0.019*“chairman” + 0.018*“Syria” + 0.016*“Islamic” + 0.015*“regarding”)
Stock Market: (44, 0.037*“price” + 0.024*“NewYork” + 0.024*“rapid” + 0.024*“Share market” + 0.021*“share price” + 0.017*“loan issue” + 0.016*“affected” + 0.014*“type” + 0.013*“category” + 0.013*“news”)
Media: (45, 0.059*“book” + 0.032*“award” + 0.031*“people” + 0.022*“book fair” + 0.022*“Prothom alo” + 0.019*“writer” + 0.017*“from book” + 0.016*“Class” + 0.016*“demand” + 0.015*“february”)
Power supply: (46, 0.047*“past” + 0.027*“year” + 0.027*“collection” + 0.022*“help” + 0.021*“supply” + 0.020*“past year” + 0.019*“electricity” + 0.019*“load shedding” + 0.015*“power” + 0.015*“achievement”)

These are all the topics extracted from the newspaper. Each topic consists of 10 words related to the topic. Some topics are a bit mixed and their meanings can differ, but most of the other topics make sense and can be classified as a category from the newspaper. However, the topics are not automatically annotated. By looking at the word groups, the annotation for each topic is made manually, since the LDA only provides groups of words/terms known as topics. Each word comes with a probability, listed in descending order.

4.6.2 Similarity Measure

Since a trained LDA model already groups topics in terms of their keywords, we run an experiment to explore the cosine similarity measure from our trained LDA model. A couple of document pairs are fed into the model and the cosine similarity value is observed. A similarity measure can also be performed with a Doc2Vec model. A comparison of the similarity scores from both the LDA and the Doc2Vec is explored. The scores are reported in Table 4.5:

Table 4.5: Cosine similarity score between different models

Document Pairs        LSI      HDP      LDA      Doc2Vec
(doc5, doc9)          21.01%   0.00%    19.07%   50.91%
(doc5, doc6)          83.14%   0.00%    71.63%   72.55%
(doc271, doc272)      82.46%   0.00%    68.68%   60.61%
(doc1, doc2)          97.09%   29.21%   97.15%   68.54%
(doc1, doc513)        28.91%   0.00%    72.45%   30.31%
(doc1916, doc1060)    89.65%   0.00%    80.99%   37.91%

A pair of documents from the dataset is fed into both of these models to observe the similarity score. For example, doc1 and doc2 are two highly related documents. They are both talking about news on the Myanmar Rohingya issue of Bangladesh. As a human interpreter, someone would judge these two news articles as a highly related pair. LDA cosine similarity gives this pair a 97.15% similarity, which is close to what a human interpreter would assign. On the other hand, Doc2Vec performed poorly and gave it only a 68.54% similarity, which is not the case in reality. Document 1916 and document 1060 are talking about “Technology”. In this case LDA performs better than Doc2Vec again. Now let us see how these models work on dissimilar news. It is expected that two dissimilar news articles are likely to have a low cosine similarity score. A similarity check for document 5 and document 9 is done, where one is about “Sports” and the other is about “Foreign Affairs”. LDA gives only a 19.07% match, where Doc2Vec still gives it 50.91%. So, LDA performs better than Doc2Vec for both similarity and dissimilarity.

Now let us focus on the LSI model. Our LSI model performs better than the LDA in some cases. As we can see, it has a 21.01% cosine similarity between doc5 and doc9. This pair of documents is, however, on two different topics. In this case LDA performs better
to understand the dissimilarities. However, when it comes to calculating cosine similarity for similar documents, LSI performs better. The same holds for the similarity scores of the doc1-doc2 pair and the doc5-doc6 pair. These two pairs of documents are closely related in terms of their topics, and LSI captures the relevance better than the LDA model. One interesting point to note about the LSI experiment is that it failed significantly at understanding the similarity of documents that are far apart from each other in terms of their order of occurrence in the corpus. In the dataset, doc1 and doc513 are two documents talking about “Technology”, but LSI gives them only a 28.91% match. The SVD based LSI captures similarities a little better than the LDA; however, it does not show a stable result because it is worse at capturing similarities between two far apart data points. We also have a similarity measure with the HDP model, which performs the worst of them all. As we can see from Table 4.5, in most cases HDP cannot capture any similarity at all. It fails to relate the similarities for all the pairs except the doc1-doc2 pair, which is a highly relevant pair in terms of its sentence and word structures. For all the other pairs it did not work at all. The mixture model of HDP does not work here. As we have already discussed some of the limitations of the HDP model in chapter 2, it seems that HDP cannot relate the data points together. This is where HDP fails to interrelate the documents. Also, while assigning topics to documents, neighboring documents should have a higher probability with the same topic, which is not happening for HDP in this experiment, and we can clearly see this in the values we have from the HDP.
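The cosine similarity used in this comparison can be sketched in a few lines of Python. This is a minimal illustration, not the thesis code: it assumes gensim-style sparse output, where a document's topic distribution is a list of (topic_id, probability) pairs, and the two example distributions below are invented for demonstration.

```python
def to_dense(sparse_topics, num_topics):
    """Expand a sparse [(topic_id, prob), ...] list into a dense vector."""
    vec = [0.0] * num_topics
    for topic_id, prob in sparse_topics:
        vec[topic_id] = prob
    return vec

def cosine_similarity(a, b):
    """Cosine similarity between two dense topic vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical topic distributions for two related documents,
# using 47 topics as in the optimized model of this chapter.
doc_a = to_dense([(8, 0.70), (43, 0.25)], 47)
doc_b = to_dense([(8, 0.60), (43, 0.30), (2, 0.05)], 47)
score = cosine_similarity(doc_a, doc_b)
print(f"similarity: {score:.2%}")
```

Two documents whose probability mass falls on the same topics score close to 1, while documents concentrated on disjoint topics score near 0, which is the behaviour the table above reports for related and unrelated news pairs.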
Let us take a look at the average values of these models to compare the similarity and dissimilarity from the graph below:

Figure 4.10: Similarity and dissimilarity of cosine averages

The reason that LDA performs better for Bangla lies in the way it works. Generally, LDA can map documents to a vector and then work with a cosine similarity. Doc2Vec is a variation of Word2Vec with the extension of a document vector. One reason LDA works better is that it already captures the topics during training time, so the topics help classify the documents while scoring the cosine similarity. On the other hand, Doc2Vec can understand the semantic meaning of the sentences inside a single document and can predict the next word as well. However, with Bangla being a complex language in terms of sentence structure, where a word's semantic meaning is hard to predict, Doc2Vec fails to understand the similarity. In this situation, LDA seems to have more capability to understand the overall semantic meaning with its topic extraction techniques.

4.6.3 Performance Comparison with Other Topic Model Algorithms

We have performed a comparative analysis between different topic models to see how well they work. The goal of this analysis is to explore how the different topic models perform based on their coherence measurement. The research mainly focuses on the LDA algorithm and its different variations. So, we have compared the
LSI, HDP, LDA standalone, LDA modified, and LDA in combination with LSI. In this section we will illustrate the algorithm performance along with the topics from the different algorithms.

Figure 4.11: Model performance comparison

As we can see from the graph, we have the LSI model, which is an SVD based algorithm for topic modelling. The LSI algorithm has a coherence value of around 0.3. For our corpus, the dimensionality reduction based LSI algorithm did not perform significantly well, as can be seen from the figure. The corpus that we have contains only news texts, and when it is converted to a bag of words and then represented as a matrix, dimensionality reduction does not add any significant changes due to the characteristics of the dataset. The dataset is sparse, and since we remove the stop words and the frequent and rare words in the preprocessing stage, dimensionality reduction has no major impact on the topic modelling performance. Therefore, LSI is not an ideal topic modelling algorithm for this kind of news classification.

Next, we experiment with the HDP algorithm, which has a coherence value of around 0.68. This is indeed a significant improvement for the topic modelling. Since the news corpus has many news articles that can be grouped together, it seems from the experiment that the HDP mixture model can work better in terms of efficiency. The base distribution H, having an infinite number of atoms to support, is the main strength behind the HDP model. The hierarchical layer added to calculate the number of topics is also an advantage for this model. So, overall, HDP can be considered for news classification in Bangla. Although it does not have the best coherence measure, it can still be used for topic modelling in Bangla with a decent coherence.

Next is the base LDA algorithm. LDA is a probabilistic algorithm that generally works efficiently for news classification in English. It is one of the more widely used algorithms for topic modelling. However, the base LDA model does not show any significant improvement for the corpus when the basic non-tuned hyperparameters are kept. 10 topics are extracted under the default settings. However, this is not optimized: LDA does not know the number of topics beforehand, and it needs to be provided. So, the performance achieved is even worse than the LSI model. However, with proper hyperparameter adjustment, LDA can perform better than LSI, as we will see shortly.

After that, another experiment with LDA_Mod is performed, which is basically a modified version of the LDA. The LDA model is run each time with a different number of topics. The iteration takes place ten times, each with ten topics. Each time, the model is saved into a list. The top five models with the highest coherence values are extracted. Finally, we aggregate the coherence to see this model's performance. But this one performed poorly, with an overall 0.27 coherence value. The major reason behind this is the lack of an optimal number of topics. Since this model could not cover all the topics, the few extracted topics are not very coherent with each other, and the overall performance is poor.

In the next experiment, we propose
a new technique, the LDA iterative approach. In this experiment, we take the LDA model and optimize it with the correct number of topics for our corpus, using the topic optimization experiment described earlier in this chapter. Once we have the optimized number of topics, we feed our model into a loop in which we have set a threshold value of 0.97. That means the iteration only stops when we start getting topics with a very high probability. LDA is a probabilistic model, and different topics will occur each time the algorithm is run. It is important to know how good the topics are and also whether they are human interpretable. The threshold value ranks the topics according to their quality. Once we get the best topics, the iteration stops. After that, we take the coherence value from this model, which significantly outperforms the others. It achieves 0.82 coherence, the best of all, and it can be used in any application for news classification. The topics are already described in the earlier section. The proposed model with the optimized number of topics works best for the Bangla news corpus.

4.7 Methodology for Classifying News Category

We have gone through a document classification according to topics. As we have a trained LDA model and also the topics, we wanted to go further with the first news classification of its kind in the Bangla language. A method is proposed for classifying news with the LDA model. At first, the Document vs Topic matrix is extracted. In this matrix each term is tagged with a probability of belonging to a topic. The idea can be illustrated with a simple example. Let us assume we have a document D = “Dogs and cats are cute”, which eventually becomes Dpreprocessed = [dogs, cats, cute]. As human interpreters we can easily understand that this is a document with the topic “Animal”. We may also consider that we have two topics, k1 and k2. The matrix for the document vs term probability distribution can then look like this:

    [(0, p1)  (0, p2)]
    [(1, p3)  (1, p4)]
    [(2, p5)  (2, p6)]

where p1, p2, ..., p6 are all probabilities, and 0, 1 and 2 are the word indices from our example sentence. For each word,

    (p1 + p2) = (p3 + p4) = (p5 + p6) = 1

So, in our proposed method we extract the mean topic probability for each term. To make the example more generic, let us assume we have n terms and m topics. Our matrix is then as follows:

    [(w1, p1)     (w1, p2)     ...  (w1, p3)]
    [(w2, p4)     (w2, p5)     ...  (w2, p6)]
    [   ...          ...       ...     ...  ]
    [(wn, pm−2)   (wn, pm−1)   ...  (wn, pm)]

So the mean topic probability for each word is

    mean1 = (p1 + ... + p3)/m
    mean2 = (p4 + ... + p6)/m
    meann = (pm−2 + ... + pm)/m

Finally, the document is assigned according to the mean score with the largest value. The experiment is performed with two different types of news, exploring how this method finds the topic that best fits the data. Here is the result for these two news articles. In Figure 4.12 we have used a document about a movie, and this is how the topic distribution looks. Topic 35 is the most relevant
topic for this news, outperforming the other topics in terms of probability score. Now let us take a look at this topic with its English translation:

(35, ‘0.026*“Cinema” + 0.020*“Premier” + 0.020*“Director” + 0.019*“Announce” + 0.017*“Ending” + 0.016*“Actress” + 0.015*“Next movie” + 0.015*“Release” + 0.014*“The movie” + 0.012*“Sensor”’)

Since the news is about a movie, we get a very relevant answer from our model. It gives us a topic consisting of words related to movies. The first word is “Cinema”, with the highest probability.

Figure 4.12: Document topic distribution for movie news

Figure 4.13: Document topic distribution for Trump news

(8, ‘0.052*“Trump” + 0.044*“President” + 0.025*“USA” + 0.023*“Against” + 0.020*“Administration” + 0.019*“Immigration” + 0.019*“Complaint” + 0.017*“Foreign affairs” + 0.016*“Order” + 0.016*“Direct”’)

In the second experiment, we feed in a news article about Donald Trump discussing the USA and immigration, illustrated in Figure 4.13. The model predicts it as most likely to belong to the above topic, containing the word “Trump” itself with the highest probability, followed by the other immigration and USA related terms. So the experiment shows that we can classify Bangla news successfully with this model.

Chapter 5

Conclusion and Discussion

In this research, the scope of the Bangla language in terms of topic modelling and sentiment analysis has been demonstrated. Models are proposed for two different tasks that have never been applied to Bangla. Sentiment analysis and topic modelling in Bangla are both comparatively new in terms of the techniques and algorithms applied in this thesis. News classification in Bangla has a great impact on the print and news media. With Bangladesh being a country of people who love the internet, most of them using Bangla in the virtual world, the amount of Bangla text data is ever growing. Over the past decade, this amount has grown dramatically. From Facebook to Twitter to other social sites, there are various online businesses running in Bangladesh. These businesses depend mostly on Facebook. They have Facebook pages and groups, and people comment and share their opinions on these pages. The best way to keep up a good business is to always understand and value customers' opinions. For this, sentiment analysis is important.

When the state of the art NLP technologies are used in the proper ways, insights from millions of brand conversations can be achieved with accuracy close to the human level. The proposed models will help to automate the topic understanding of a data collection as well as understand the sentiments. Otherwise, reading each document manually and understanding its semantic meaning would take forever for humans when it comes to a large dataset. Eventually, Bangla textual data will turn into big data, as the amount is growing rapidly on the world wide web. So, the necessity of automation and deployment of these state of the art algorithms for the Bangla language cannot be ignored. This research took the lead to explore and demonstrate how these algorithms and the proposed models can be effectively used for both research and
industry applications.

The first contribution of this thesis is sentiment analysis in the Bangla language, with results better than the existing models. The second contribution of the thesis is the first ever topic modelling in Bangla for news classification using the LDA algorithm. Topic modelling has broad opportunities in terms of the Bangla language. The Bangla language is still lagging far behind due to its scarce resources, and from the implementation point of view many challenges exist for Bangla since there are no established libraries for the language. However, a topic model is proposed in this thesis, and a comparison of the model with the other algorithms is also demonstrated. It is demonstrated that the proposed model built with the LDA algorithm works better than the other models. A news classification system for Bangla is demonstrated, which had never been explored from the LDA perspective. Document similarity is also experimented with using the various algorithms, and the models are compared. Topic modelling in Bangla has various applications, from news companies to online markets. In this research it is demonstrated that topic modelling can be extended to the Bangla language. With Bangla having a strong online presence over the past few years, there is a lot more to do with the language regarding topic modelling and other NLP tasks. Online Bangla libraries can use this tool as a recommender system. This work can also be extended to a trending-topic task. Topic trend analysis is an important field in NLP which can be applied to the Bangla language, since there are tons of data being generated every day, and these large datasets can be analyzed to gain insight into the economy of Bangladesh, which has recently been considered the "Rising Tiger" of South Asia. The ready made garment (RMG) industry generates a large amount of textual data which can be used for topic modelling as well as topic trend analysis. The RMG sector is the main powerhouse of Bangladesh's economy, and it is yet to be explored for the topic modelling task. Trending topics can play a vital role in predicting the corruption rate across different districts in Bangladesh by applying LDA to local newspapers from different districts and focusing on crime news. Moreover, it can be stretched to public sentiment analysis for prediction over diverse aspects through the print and news media.

From the technical point of view, Gensim, a Python library, is used for the topic modelling. Gensim is a powerful tool for any topic modelling task. The main challenge for this research was obtaining the dataset, due to the lack of publicly available data. So, I crawled all the data from the online newspapers and organized it, which was time consuming. The country's largest online newspaper, the daily "Prothom Alo", was scraped and the data organized chronologically. An extra layer of cleaning was applied to the dataset to remove some insignificant words that were not removed during the stop words removal stage. Some of the extremely rare words were treated as stop words, since they had no effect on the result. Significantly frequent words were removed too. Stemming can further be
applied, although the LDA algorithm can extract word-to-word semantic relationships, so it is not strictly necessary for topic modelling. Bigrams are used in this thesis. In the future, trigrams and other n-grams can also be applied to see their effect on the results.

The other contribution of the thesis is sentiment analysis with an RNN. Apart from a character-level model, a word-level RNN was proposed in the respective chapter. A comparison of these two results was shown, and how the character-level encoding outperforms the word-level model is also discussed. In this model there is scope to address sentence scoring methods. There are different ways to develop a sentence scoring system and also to apply the LDA algorithm to sentiment analysis. Identifying frequent words and their relation to the sentences may lead to better results in terms of sentiment analysis.

To conclude, this thesis offers a research based study on a character-level RNN for sentiment analysis in Bangla. However, the model is not generic, since it was only shown to work well with data from a specific domain. Also, sarcastic sentence detection is not addressed in the model. So, if a positive word is used in a sentence with a negative sarcastic meaning, the model will not be able to detect this. This needs to be addressed, which is a challenge due to the level of abstraction a user can create in one sentence, and intensive research is needed in this regard. Analysis from this thesis shows that the character-level RNN is an effective method to extract sentiment from Bangla. Making the model more reliable across different data is one future goal of this thesis that will make it usable in industry.
Identifying and Categorizing Opinions Expressed in Bangla Sentences using Deep Learning Technique

Moqsadur Rahman, Summit Haque, Zillur Rahman Saurav
Dept. of CSE, Shahjalal University of Science and Technology, Sylhet

International Journal of Computer Applications (0975 – 8887), Volume 176, No. 17, April 2020. DOI: 10.5120/ijca2020920119

ABSTRACT
Identifying and categorizing opinions in a sentence is a prominent branch of natural language processing. It deals with text classification to determine the intention of the author of a text: the intention can be the expression of happiness, sadness, patriotism, disgust, advice, and so on. Most research on opinion or sentiment analysis targets the English language, while the Bengali corpus is growing day by day. A large number of online news portals publish their articles in the Bengali language, and a few of them have a comment section that lets people express their opinions. In this work, Bengali sports news comments published in different newspapers are used to train a deep learning model that can categorize a comment according to its sentiment. Comments are collected and separated based on their immanent sentiment. The deep learning algorithms used are the Convolutional Neural Network (CNN), the Multilayer Perceptron, and Long Short-Term Memory (LSTM).

General Terms: Sentiment Analysis, Deep Learning, Emotion Classification
Keywords: CNN, LSTM, ROC curve, Confusion Matrix, Performance Analysis

1. INTRODUCTION
Sentiment analysis is the process of determining the emotional bias behind a series of words, used to achieve an understanding of the attitudes, opinions, and emotions expressed within an online mention. In general, sentiment analysis aims to determine the attitude of a speaker, writer, or other subject concerning some topic, or the overall contextual polarity or emotional reaction to a document, interaction, or event. Sentiments are inherently subjective; different people may interpret the attitude of the same text differently.
Sentiment analysis is extremely useful in social media monitoring, as it provides an overview of the wider public opinion behind certain topics. The Internet is a global platform for sharing opinions: thousands of people voice their views on social media and blogs and post reviews of different products and services, and online news portals publish articles in many categories, so online text content is growing rapidly. People make decisions by analyzing product reviews, news articles, social media posts, and so on. It is also important to know the opinions of individual journalists and columnists, and public opinion, on issues such as sports, business and trade, and national and international affairs. It is hard to analyze the opinions and feelings in this huge volume of news content manually.

Sentiment analysis, also known as opinion mining, focuses on extracting the opinion or feelings of a writer: it is a process for detecting someone's attitude towards a particular product or service. It matters for decisions on judgmental issues such as politics, sports, financial conditions, and product reviews. Humans are subjective, and that is why opinions are important. Business organizations, governments, service-providing organizations, and sports lovers analyze public opinion to identify opportunities, make decisions, and make progress. Buyers want to read reviews before buying a product, sports organizations need public opinion to understand audience expectations, governments need public opinion before making policy, and even psychological investigation requires sentiment information. So decision-makers increasingly rely on the content of online media such as news, reviews, micro-blogs, and postings on social sites.

A newspaper shows us the current happenings of the world and can be called a mirror of society. The sports section of a news portal carries news about football, cricket, tennis, hockey, sports clubs, team leaders and other players, scores, and other sports issues. People read sports news to follow scores, players' activities, sports organization policy, goals for the future, and local, national, or international matters that will affect them. Some online news portals have comment sections where people can express their opinions, which is helpful for learning others' views on a specific topic.
Sentiment analysis can find the polarity of public opinion towards any topic. This study may help people to find current the condition of sports issues of any country and time. People have different opinions in different situations. So, their expressions are different depending on the news they like or dislike. In Bengali news, people may share their comments as they like it most, some don‟t like it, some become worried and sad about it, some like to advise on it, some become angry and so on. So, there may be a huge amount of opinions expressed by people in Bengali news. In this paper, all those opinions are categorized by collecting those comments as data and labeling them as the people want to express. There are so many opinions or sentiment in Bengali comments like happiness ( ), sadness ( ), desire ( ), compliment ( ), advice ( ), patriotism ( ), annoyance ( ), etc. From those sentiments, only four chosen for this work. They are happiness ( ), sadness ( ), advice ( ) and annoyance ( ). When some comment indicates any of the above four it means that comment or sentence have an opinion expressed by the people. When no opinions are implied</s>
<s>from the above four, International Journal of Computer Applications (0975 – 8887) Volume 176 – No. 17, April 2020 we like to label them as neutral ( ). So comments or data are labeled using five classes after collected. Various statistical and linguistic techniques have been developed for Sentiment analysis. All these methods are applied to the English language and there is a huge scope to work with the Bengali language. In this investigation, Neural Network models are used to detect the sentiment of Bengali sports news comments. 2. RELATED PREVIOUS WORKS Identifying and categorizing opinions expressed in Bangla sentences is one of the common text classification problems and researches have been conducted on sentiment analysis on Bengali text. Das & Bandyopadhyay presented two different approaches for identifying emotion holders from Bengali sentences. In this work, the first approach, the baseline model, is developed based on the combinations of various part-of-speech (POS) features extracted from the phrase-based similarities and the second approach, syntactic model, is based on the argument structure of the sentences concerning the verbs. They have tested Bangla text with cosine similarities using TF-IDF, Naive Bayes with POS tagger, stemmer. Some of them have worked on news article sentiment analysis [1]. The Centre for Research on Bangla Language Processing of BRAC University has conducted some works. They proposed a way to implement a corpus by collecting data from online resources [2]. They implemented a POS tagger based on HMM, n-gram and Brill's tagger [3]. The result is analyzed with a small corpus of 5000 words giving an accuracy of only 55%. Md. Zahurul Islam and Naushad UzZaman present the compilation methodology and some statistical by observing the typical behavior of Zipf's curve for Bangla news corpus-“Prothom Alo corpus”, which is the first of its kind for Bangla [4]. 
Muhammad Mahmudun Nabi, Tanzir Altaf and Sabir Ismail (2016) used feature sets and supervised classifiers and proposed a method to recognize the sentiment or opinion and extract a unique feature to come out a better approach to understanding sentiment from Bangla text using [5]. Horoscopes consist of future predictions for each of the twelve zodiac signs and are very popular in India. Tirthankar Ghosal and Sajal K. Das mainly focus on the sentiment analysis of Bengali daily horoscope using SVM with unigram features on the paper. They are given the positive and negative emotional basis of the sentence by crawling a leading Bengali newspaper's daily horoscope section [6]. MaxEnt and SVM algorithms are compared in paper [7] for sentiment analysis on Bengali microblog posts with different feature extraction methods and it gets the best performance with SVM with unigram and emoticons as features. Also, sentiment analysis is done on the Bengali horoscope corpus in paper [8] using ML algorithms. NB, SVM, K-Nearest Neighbors (NN), Decision Trees (DT), and Random Forest (RF). Among those SVM has the best performance with unigram features. In paper [9] M. Trivedi, N. Soni, S. Sharma, and S. Nair shows how Support Vector Machine (SVM) and</s>
<s>Naive Bayes (NB) algorithms are compared for text classification where SVM outperforms. S. Z. Mishu and S. M. Rafiuddin‟s Paper [10] compare text classification performance on different supervised ML models where back-propagation based Neural Network has the best performance. 3. DATASET To start a Sentiment Analysis process, it is always required to build a sentiment lexicon and annotated data for machine learning. The main source of raw data of the corpus is web content. Online portals of more than 24 Bengali newspapers1 are investigated for raw data and it has been found that Prothom Alo2 has a huge collection of visitor's comments. So raw data of the corpus is collected from the popular Bengali newspaper Prothom-Alo. After collecting all the raw data some data annotation campaigns were arranged where some students from the age range 19 to 25 annotated the data in five categories: happiness ( ), sadness ( ), advice ( ), annoyance ( ) and neutral ( ). Here in the table 1 a brief statistics about the dataset is given, Table 1. Dataset in brief Source www.prothomalo.com Size 2492 sentences Average word per sentence 8.3 Max word in a sentence 25 Figure 1. Number of data per class In Figure. 1 we can get a brief overview of the number of data for each of the target classes. We can see that the dataset is quite imbalanced as the number of data in the "Depression" section is below 300 where all other classes have more than 500 data and the class labeled "Neutral" has more than 600 data. But since the dataset is human-annotated and all the labeling is done according to the sentiment of those sentences so nothing is done to maintain the balance. 4. DEEP NEURAL NETWORKS Some traditional deep neural networks have been used in this experiment. 4.1 CNN For CNN's strong ability to learn relevant features, its usage not only confined to image recognition, now it's heavily being used in NLP tasks. Different types of CNN have been used in this experiment. 
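The core idea of a CNN over text is that each filter spans a window of consecutive word embeddings (an n-gram) and max pooling keeps its strongest activation anywhere in the sentence. A toy NumPy sketch of this mechanism (the sizes, random weights, and single-filter branches are illustrative assumptions, not the authors' configuration):

```python
import numpy as np

def conv1d_maxpool(embeddings, filt, bias=0.0):
    """Slide one n-gram filter over a (seq_len, emb_dim) embedding
    matrix, apply ReLU, then take the global max (max pooling)."""
    n, d = filt.shape                    # filter width n (the n-gram size)
    acts = []
    for i in range(embeddings.shape[0] - n + 1):
        window = embeddings[i:i + n]     # n consecutive word vectors
        acts.append(max(0.0, float(np.sum(window * filt) + bias)))  # ReLU
    return max(acts)                     # one pooled feature per filter

rng = np.random.default_rng(0)
sentence = rng.normal(size=(8, 6))       # 8 words, toy embedding dimension 6
features = [conv1d_maxpool(sentence, rng.normal(size=(n, 6)))
            for n in (2, 3, 4)]          # parallel bi-, tri-, tetra-gram branches
print(len(features))                     # prints 3
```

In a real model each branch would hold many filters trained end-to-end in a framework such as Keras, and the pooled outputs of the parallel branches would be concatenated before the classification layer.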
Here, 1-D CNNs and parallel 1-D CNN layers with bi-gram, tri-gram, and tetra-gram filters are used. A demo example of a 1-D CNN with parallel layers, with embedding dimension six, is given in Figure 2. In the experiments, different embedding dimensions were tried, and the best result was found with embedding dimension 200.

Figure 2. CNN with parallel layers

4.2 LSTM
RNNs are very efficient at capturing long-distance dependencies, and the LSTM variant solved their vanishing gradient problem [11]-[13]. LSTM cells carry past information through themselves; each cell has different gates to interact with the data passing through it, and can update or remove portions of that data using the gates. A schematic design of a basic LSTM cell is given in Figure 3. The equations used by an LSTM cell to perform its operations are given below:

ft = σ(Wf · [ht-1, xt] + bf)       (1)
it = σ(Wi · [ht-1, xt] + bi)       (2)
C̃t = tanh(Wc · [ht-1, xt] + bc)    (3)
Ct = ft * Ct-1 + it * C̃t           (4)
ot = σ(Wo · [ht-1, xt] + bo)       (5)
ht = ot * tanh(Ct)                 (6)

Figure 3. Basic LSTM cell

5. RESULTS & ANALYSIS
The deep learning models are applied to the dataset. The scikit-learn (http://scikit-learn.org/) train-test split API was used to split the dataset into training and test sets with an 80 to 20 ratio. As ROC-based measures are preferred for imbalanced datasets, the authors principally use F1 and ROC-area measures. A ROC curve plots the false positive rate on the X-axis against the true positive rate on the Y-axis; the rates are calculated using equations 7 and 8:

TPR = TP / (TP + FN)   (7)
FPR = FP / (FP + TN)   (8)

Here TPR, FPR, TP, FN, FP, and TN stand for True Positive Rate, False Positive Rate, True Positive, False Negative, False Positive, and True Negative, respectively. Experimenting with the deep learning models on the dataset, several measurements were computed as metrics of model performance. Table 2 lists the F1 score, precision, and recall of all three models.

Table 2. Performance measurement of all models
  Model  F1 score  Precision  Recall
  CNN    0.4819    0.4863     0.4819
  LSTM   0.4658    0.4617     0.4658
  DNN    0.4236    0.4319     0.4236

The performance table shows that the CNN model beats the other two models on every measure. This can be explained by the fact that LSTMs work well when sentences are long enough; in this dataset the average sentence length is 8 words, so the CNN worked better than the LSTM here. Further analysis was therefore continued on the CNN model. The ROC curve generated for the CNN model is shown in Figure 4.

Figure 4. ROC curve for CNN

The confusion matrix for the CNN model is shown in Figure 5.
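The rates in equations (7) and (8), and the precision, recall, and F1 figures of Table 2, all come straight from confusion-matrix counts. A small self-contained sketch with made-up labels (not the paper's data), treating one class as "positive":

```python
def binary_counts(y_true, y_pred, positive):
    """Count TP, FP, FN, TN for one class treated as the positive label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

# Hypothetical gold labels and predictions, just to exercise the formulas.
y_true = ["happy", "sad", "happy", "happy", "sad", "sad"]
y_pred = ["happy", "happy", "happy", "sad", "sad", "sad"]
tp, fp, fn, tn = binary_counts(y_true, y_pred, positive="happy")

tpr = tp / (tp + fn)                          # equation (7); also the recall
fpr = fp / (fp + tn)                          # equation (8)
precision = tp / (tp + fp)
f1 = 2 * precision * tpr / (precision + tpr)  # harmonic mean of P and R
print(tp, fp, fn, tn)                         # prints 2 1 1 2
```

In practice scikit-learn's `roc_curve` and `precision_recall_fscore_support` compute these quantities directly; the hand-rolled version above only makes the arithmetic behind the tables explicit.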
Figure 5. Confusion matrix for CNN

Again, the CNN outperforms the two other models. However, the overall result is not satisfactory by ordinary standards, which is expected because the dataset is not large; on a dataset like this, classification into 5 classes is a Panglossian task. The dataset was therefore reduced to 2 classes, happiness and sadness, and the same models were applied to the newly divided dataset. Table 3 lists the F1 score, precision, and recall of all three models on the two-class task.

Table 3. Performance measurement of all models (2 classes)
  Model  F1 score  Precision  Recall
  CNN    0.7557    0.7598     0.7530
  LSTM   0.7117    0.7093     0.7168
  DNN    0.6867    0.6862     0.7048

The ROC curve generated for the CNN model on the two-class task is depicted in Figure 6, and the corresponding confusion matrix is given in Figure 7.

Figure 6. ROC curve for CNN (2 classes)
Figure 7. Confusion matrix for CNN (2 classes)

The accuracy of all models for both the 5-class and 2-class experiments is given in Figure 8.

Figure 8. Accuracy comparison of all models

It clearly shows that the performance of all the models jumps when they are applied to fewer classes, because the number of examples per class increases. This also indicates that the models would work well on the five-class dataset if enough data were available.

5.1 HINDI DATASETS
Since the dataset was not large enough to prove the performance of the models, they were further applied to two Hindi datasets. There the models also performed consistently, with better accuracy than the CNN-SVM approach of the existing work [14], whose accuracies on 'HS1' and 'HS2' were 57.34 and 44.88. The CNN outperformed that work [14]; the F1 scores and accuracies it achieved are given in Table 4.

Table 4. F1 and accuracy measures on datasets of the Hindi language
  Model  F1 (HS1)  F1 (HS2)  Accuracy (HS1)  Accuracy (HS2)
  CNN    0.65      0.47      67.25           54.10

The ROC curves for HS1 and HS2 are given in Figure 9 and Figure 10, respectively.

Figure 9. ROC curve for dataset HS1

6. CONCLUSION
Sentiment analysis is a very active research area of NLP, with great impact on politics, business, and other social sectors. In this work, different deep neural network architectures are analyzed for sentiment analysis of comments on Bengali sports news. The type of classification is different here: the authors target the emotions hidden inside a sentence.
A promising model has been established that can classify a sentence according to its hidden emotion. To check the consistency of the model, it was also tested on two Hindi-language datasets, and it was found to work well on those datasets too.

Figure 10. ROC curve for dataset HS2

7. REFERENCES
[1] Das, D., and Bandyopadhyay, S. 2010. Finding emotion holder from Bengali blog texts: an unsupervised syntactic approach. PACLIC 24 - Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation.
[2] Sarkar, Iqbal, A., Pavel, D. S. H., and Khan, M. 2007. Automatic Bangla corpus creation. Center for Research on Bangla Language Processing (CRBLP), BRAC University.
[3] Hasan, Muhammad, F., UzZaman, N., and Khan, M. 2007. Comparison of different POS tagging techniques (n-gram, HMM and Brill's tagger) for Bangla. Advances and Innovations in Systems, Computing Sciences and Software Engineering.
[4] Majumder, K. M. Y. A., Islam, M. Z., UzZaman, N., and Khan, M. Analysis of and observations from a Bangla news corpus. Center for Research on Bangla Language Processing, BRAC University, Dhaka, Bangladesh.
[5] Nabi, M. M., Altaf, M. T., and Ismail, S. 2016. Detecting sentiment from Bangla text using machine learning technique and feature analysis. International Journal of Computer Applications (0975 – 8887), Volume 153, No. 11.
[6] Ghosal, T., Das, S. K., and Bhattacharjee, S. Sentiment analysis on (Bengali horoscope) corpus.
[7] Chowdhury, S., and Chowdhury, W. 2014. Performing sentiment analysis in Bangla microblog posts. Int. Conf. Informatics, Electron. Vision, ICIEV.
[8] Ghosal, T., Das, S. K., and Bhattacharjee, S. 2016. Sentiment analysis on (Bengali horoscope) corpus. 12th IEEE Int. Conf. Electron. Energy, Environ. Commun. Comput. Control (E3-C3), INDICON 2015, pp. 1–6.
[9] Trivedi, M., Soni, N., Sharma, S., and Nair, S. 2015. Comparison of text classification algorithms. International Journal of Engineering Research & Technology (IJERT), vol. 4, no. 2, pp. 334–336.
[10] Mishu, S. Z., and Rafiuddin, S. M. 2016. Performance analysis of supervised machine learning algorithms for text classification. 2016 19th Int. Conf. Comput. Inf. Technol., pp. 409–413.
[11] Paul, Kumar, A., and Shill, P. C. 2016. Sentiment mining from Bangla data using mutual information. Electrical, Computer & Telecommunication Engineering (ICECTE), International Conference on. IEEE, 2016.
[12] Islam, M. S., Islam, M. A., Hossain, M. A., and Dey, J. J. 2016. Supervised approach of sentimentality extraction from Bengali Facebook status. In Computer and Information Technology (ICCIT), 2016 19th International Conference on, pp. 383–387. IEEE.
[13] Islam, M. S., Al-Amin, M., and Uzzal, S. D. 2016. Word embedding with Hellinger PCA to detect the sentiment of Bengali text. Computer and Information Technology (ICCIT), 2016 19th International Conference on. IEEE.
[14] Akhtar, M. S., Kumar, A., Ekbal, A., and Bhattacharyya, P. 2016. A hybrid deep learning architecture for sentiment analysis. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pp. 482–493.
Employing Machine Learning Techniques on Sentiment Analysis of Google Play Store Bangla Reviews

Md Muhtasim Jawad Soumik, Syed Salvi Md Farhavi, Farzana Eva, Tonmoy Sinha, Mohammad Shafiul Alam
Dept. of Computer Science and Engineering, Ahsanullah University of Science and Technology, Dhaka, Bangladesh
(muhtasimjawad007@gmail.com, salvifaravi111@gmail.com, farzana0023@gmail.com, tonmoy2101@gmail.com, shafiul.cse@aust.edu)

2019 22nd International Conference on Computer and Information Technology (ICCIT), 18-20 December 2019

Abstract—This article offers an in-depth look at a number of existing methodologies for performing sentiment analysis via text classification on a Bangla dataset. Although rapidly developing machine learning algorithms are showing promising results, the viability of those methods for non-English languages such as Bangla is yet to be fully explored. This research aims to fill some of those gaps through proper application of machine learning techniques, with words converted into feature vectors via the TF-IDF algorithm on data crawled from the Google Play Store, the largest Android application market. Several significant algorithms are implemented, starting with linear ones such as Naïve Bayes and the linear Support Vector Machine (SVM), and an in-depth comparison is made among the results of the various algorithms. The experimental results indicate that even baseline algorithms, after proper pre-processing, can show promising results on our Bangla dataset: Naïve Bayes, the Support Vector Machine, and Logistic Regression achieve an accuracy score of 0.75 on average even with the data limitation. An ensemble method with the Adaptive Boosting technique is also proposed, showing an accuracy score of 0.7639 with five-fold cross-validation. The SVM has the best accuracy score, 0.7648, among all the algorithms when five-fold cross-validation is applied, and Gradient Boosting has the best accuracy score, 0.7695, when it is not.

Index Terms—sentiment analysis, natural language processing, machine learning, google play store, ensemble method

I. Introduction
To stay one step ahead of others in the highly competitive world of marketing, it is an absolute necessity to understand user sentiment towards a specific product and remodel or refine it accordingly. As the main goal of any production company is to sell more and more products to the target buyers or audience, what users want holds the most importance to them. Users today are constantly looking for products to match their ever-changing taste, which is especially true in an open market like the one for Android apps, the Google Play Store. Most applications are free to download, and users have a vast number of choices for any specific type of application, which means ever more competition among developers. This again underlines the importance of quality control for applications, understanding user sentiment, and updating applications accordingly. Every application page on the Google Play Store has a comment section where users convey their constructive criticism. It is rather obvious that only a human can come close to grasping the sentiment of another human through written text, but it is impossible to do so manually, as an app can have millions of users and millions of comments. Machine learning can play a significant part here, as there is no
shortage of data, and it can be a great way to improve applications. Sentiment analysis using machine learning has already yielded very promising results in user-review based scenarios such as IMDb (the Internet Movie Database) and Amazon store reviews, but despite the Play Store being a very popular application marketplace, a lack of research in this area is evident. One major challenge in conducting sentiment analysis on Play Store reviews is the language barrier. With the advancement of research based on machine learning and sentiment analysis, different libraries can be found that simplify the whole process from text pre-processing to text classification for the English language. To close this language barrier, text in the Bangla language is chosen, not only because it is the national language of Bangladesh but also because, according to StatCounter (http://gs.statcounter.com/os-market-share/all/bangladesh, Feb. 2019), Android holds 75 percent of the total operating-system market share in Bangladesh. Moreover, Bangla is a widely spoken language, currently in fourth position by the number of people speaking it as their first language (https://www.ethnologue.com/13/top100.html).

This paper shows the process followed to perform sentiment analysis, starting from fetching data from the Play Store, through applying various machine learning algorithms, to comparing the results of the algorithm implementations.

II. Related Work
Sentiment analysis is a popular topic, and many projects and studies have addressed movie reviews, Twitter data, product reviews, and so on. Microblog posts such as tweets were used for sentiment classification by Phani, Lahiri and Biswas (2016), who worked with three different languages (Bangla, Tamil and Hindi) and performed stratified 10-fold cross-validation on the training data [5]; for cross-validation, they experimented with word n-grams, character n-grams, surface features and Sentiword features. Agarwal, Xie, Vovsha, Rambow and Passonneau (2011) also used Twitter data and introduced POS-specific features based on polarity; for 3-way classification of tweets (positive, negative and neutral), they worked with three types of model: a unigram model, a feature-based model and a tree-kernel based model [3]. Sentiment analysis was also done on Epinions reviews by Turney (2002), who used an unsupervised learning algorithm to identify sentiment and predicted a review's polarity from the average semantic orientation of its phrases containing an adjective or adverb; the Pointwise Mutual Information and Information Retrieval (PMI-IR) algorithm was used to estimate the semantic orientation of the phrases [2]. Kiritchenko, Zhu, Cherry and Mohammad (2014) determined aspect terms, aspect categories and sentiment from customer reviews: they used the PMI method to create sentiment lexicons and the Brown clustering algorithm to create word clusters, tagged token sequences with a semi-Markov tagger trained using the structural Passive-Aggressive (PA) algorithm, divided their features into emission and transition features, and used a multi-class SVM to classify sentiment (positive, negative, neutral and conflict) [4]. Nguyen, Nguyen and Pham (2013) used Naïve Bayes and an SVM to classify sentiment in two stages: first a Naïve Bayes classifier determined the sentiment, then the sentiments misclassified by Naïve Bayes were forwarded to an SVM for reclassification [1]. Tanmoy Chakraborty and Sivaji Bandyopadhyay (2010) identified reduplications at the expression and semantic level in Bengali: they identified multiword expressions (MWEs) in the tokenization phase, after which a POS tagger marked those words as unknown words, and a Bengali shallow parser was
<s>used to identify the hyphenated reduplications. They designed the system in two phases, where the first phase identified five cases of reduplication (complete, partial, onomatopoeic, correlative and semantic) and the second phase attempted to extract the associated sense [6].
III. Data Description
The Google Play Store hosts different types of Android applications, most of which come free, so the whole review section is a reflection of opinions from people of different tastes and mentalities. A web crawler was built from scratch for the data collection, which yielded the critical review information: user name, rating and review body. Some challenges were visible from the early stage of data collection, such as misspellings, lexical variation, slang and emoticons. Figure 1 shows the working process followed to collect the Bangla dataset.
Fig. 1. Process of dataset creation
Selenium and Beautiful Soup are the library and the parser which were used for automated crawling of reviews. Reviews were collected from more than 100 apps (Bkash, Daraz, Uber etc.), yielding around 10000 reviews and 6500 unique words.
TABLE I: Preview of Dataset
Name          | Rating | Review
Sharmin Akter | 5      | চমৎকার অ াপ। আরও নতুন বই চাই।
রমজান েহােসন   | 1      | েকউ ইনস্টল করেবন না। এটােত শ‌ুধু এড আর এড। খুবই ফাউল অ াপ। এত বােজ অ াপ আিম আেগ েদিখ নাই।
Md. Sakib     | 3      | সবিকছু ভাল েলেগেছ। িকন্তু েপজ নামব্ার েদয়া নাই। ফেল কত েপজ পড়া হেয়েছ তা বুঝা কষ্টকর।
Reviews were annotated manually by human annotators as positive, negative and neutral, denoted as 3, 1 and 2 respectively. A few examples showing the result of the annotation process are shown below.
TABLE II: Preview of Dataset after Annotation
Review | Annotation
চমৎকার অ াপ। আরও নতুন বই চাই। | 3
েকউ ইনস্টল করেবন না। এটােত শ‌ুধু এড আর এড। খুবই ফাউল অ াপ। এত বােজ অ াপ আিম আেগ েদিখ নাই। | 1
সবিকছু ভাল েলেগেছ। িকন্তু েপজ নামব্ার েদয়া নাই। ফেল কত েপজ পড়া হেয়েছ তা বুঝা কষ্টকর। | 2
An equal amount of reviews of each class was taken to generate an unbiased result, and the dataset was divided into five parts for cross-validation purposes.
IV.
Methodology
The steps involved in sentiment analysis are shown in Figure 2: reviews, preprocessing, feature selection, sentiment classification and sentiment polarity.
Fig. 2. Process of Sentiment Analysis
A. Preprocessing
All the reviews are preprocessed as follows:
• With the help of the WhitespaceTokenizer() method, tokens were extracted from strings of words or sentences, splitting on whitespace, newlines and tabs
• Words in each sentence were sorted according to the standard sorting order defined by Bangla Academy1
• A list containing 398 Bangla stopwords was taken from a GitHub repository2, more words were added to the list, and it was used for common stopword detection and removal
• Not every word in a review is significant; certain words show their own weight through the number of times they occur. This was captured through frequency distribution
Fig. 3. Frequency distribution of words (bar chart of the most frequent words, with counts up to about 1,400)
• Manual correction was done for spelling mistakes and lexical variations, due to the lack of any other efficient process for Bangla
Figure 4 shows the preprocessing work flow: the sentence "চমৎকার অ াপ, আরও নতুন বই চাই।" is tokenized into "চমৎকার", "অ াপ", "আরও", "নতুন", "বই", "চাই", "।" and, after stopword removal, reduced to "চমৎকার", "অ াপ", "নতুন", "বই".
Fig. 4. Preprocessing work flow
1 Bangla Academy sorting - https://github.com/banglakit/bangla-academy-sort
B. Feature Selection
A feature, in the language processing scope, refers to the numeric vectors converted from textual data. Since the textual data here is in a different language, building a feature selection system from scratch seemed the most convenient solution. To do so, the Term Frequency-Inverse Document Frequency (TF-IDF)</s>
<s>method is chosen, as it is the most popular approach.
2 Stop word - https://github.com/stopwords-iso/stopwords-bn
TF-IDF tackles the issue of the most frequent words being undesirable for algorithm implementation by assigning them less weight: for example, the word "অ াপ", which is found most frequently in Play Store reviews but is unnecessary for sentiment analysis purposes, is weighted close to zero.
C. Classification Techniques
The dataset is a completely new one, so no external dataset was available for testing. For this reason, the dataset was divided into a train set and a test set, and all the algorithms were applied to them. Five-fold cross-validation was applied to get an unbiased result, as there is no guarantee that changing the test set will not yield an accuracy score lower than before. The techniques which were implemented are as follows:
a) Naïve Bayesian Classifier: The Naïve Bayes classifier is widely used in natural language processing (NLP) problems. Multinomial Naïve Bayes is used in this paper. This classifier calculates the probability of each tag for a document and returns the tag with the highest probability. It works well for data which can easily be turned into counts, such as word counts in text, and the TF-IDF vectorizer is used to turn words into numbers based on word counts.
b) Support Vector Machine: A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples. In two-dimensional space this hyperplane is a line dividing a plane into two parts, with each class lying on either side. The vectors (cases) that define the hyperplane are the support vectors.
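The TF-IDF weighting described above can be sketched with scikit-learn's TfidfVectorizer; the tiny English corpus below is a hypothetical stand-in for the tokenized Bangla reviews:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical mini-corpus standing in for the preprocessed Play Store reviews.
reviews = [
    "great app love it",
    "great app but too many ads",
    "worst app ever do not install",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reviews)

# A word like "app" occurs in every review, so its IDF (and hence its
# TF-IDF weight) is lower than that of distinctive words like "worst".
print(X.shape)                        # (number of reviews, vocabulary size)
print(sorted(vectorizer.vocabulary_)[:3])
```

The resulting sparse matrix is the feature set that the Naïve Bayes, SVM and Logistic Regression classifiers consume.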
The Linear Support Vector Machine used in this paper is widely regarded as one of the best text classification algorithms.
c) Logistic Regression: Logistic Regression is a simple and easy-to-understand classification algorithm, and it can easily be generalized to multiple classes. It assumes a linear, additive relationship between the predictors and the log odds of a classification. It analyzes a set of data points with one or more independent variables and finds the best-fitting model to describe the data points using the logistic regression equation. Logistic Regression is very effective for problems in which the set of input variables is well known and closely correlated with the outcome.
d) Ensemble Methods: Ensemble methods are a machine learning technique that combines several base models in order to produce one optimal predictive model. Which algorithm works best for a certain scenario cannot be predicted in most cases before experimentation. So instead of using only one algorithm and hoping for the best, introducing ensemble methods can ensure the better algorithm is taken into account through a max voting system, by averaging the results, or by delving into advanced ensemble techniques like bagging and boosting. The idea behind bagging is to combine the results of multiple models to get a generalized result; but if all the models are created on the same set of data, chances are these models will give the same result, since they are getting the same input. The solution to this problem is bagging/bootstrapping: a sampling technique in which subsets of observations from the original dataset are created, with replacement. A problem can still arise where a data point is incorrectly predicted by</s>
<s>the first model and then by the next, which makes combining the predictions useless for providing a better result. Such a situation can be handled by boosting. This is a sequential process in which each subsequent model attempts to correct the errors of the previous model. Adaptive Boosting, Gradient Boosting and Extreme Gradient Boosting were tested, as all of them are viable options for classification problems, with XGBoost already proven to be a highly effective ML algorithm.
V. Experimental Result
Naïve Bayes, Support Vector Machine and Logistic Regression were performed on our dataset, which consisted of about 10000 reviews (positive, negative and neutral) containing about 6500 unique words. The dataset was divided into five parts for cross-validation, and the bagging and boosting techniques were applied taking Logistic Regression as the baseline algorithm, which generated the following results:
TABLE III: Result Comparison
Name of Algorithm         | Accuracy with 5-fold | Accuracy without 5-fold
NB                        | 73.67%               | 75.98%
SVM                       | 76.48%               | 75.02%
Logistic Regression       | 75.87%               | 75.98%
Bagging Meta-estimator    | 75.44%               | 75.81%
Adaptive Boosting         | 76.39%               | 75.90%
Gradient Boosting         | 76.04%               | 76.95%
Extreme Gradient Boosting | 73.03%               | 72.64%
Here it can be seen that although Naïve Bayes does not provide the best result, it is certainly close to the other algorithms. This indicates that the Naïve Bayes model works well even in heavily context-dependent situations. Next comes the SVM algorithm, which shows more promise than Naïve Bayes. SVM offers a few kernel functions, among which the simple linear kernel works well and fast. Text is often linearly separable and has many features, which justifies using the linear kernel for the test. After that, Logistic Regression was implemented, which also showed promise; it was implemented solely because it is a simple algorithm which can easily be generalized to multiple classes.
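The five-fold evaluation behind Table III can be sketched as follows; the two-sentence toy corpus and its 3/1 labels are placeholders for the annotated review dataset:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder corpus; the real experiment uses ~10000 annotated Bangla reviews.
texts = ["good app"] * 10 + ["bad app"] * 10
labels = [3] * 10 + [1] * 10   # 3 = positive, 1 = negative, as annotated

for clf in (MultinomialNB(), LinearSVC(), LogisticRegression()):
    pipe = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipe, texts, labels, cv=5)   # five-fold CV
    print(type(clf).__name__, round(scores.mean(), 4))
```

Wrapping each classifier in a pipeline re-fits the TF-IDF vocabulary on every training fold, so no information leaks from the held-out fold into the features.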
Lastly, a few boosting ensemble methods were tested to see if they could improve the results already shown, as the nature of these algorithms is to try to correct the errors of the previous model. The results are not necessarily better than those of the previous algorithms, but they are good nonetheless. Among the applied ML algorithms, SVM has the highest accuracy score and performs well with five-fold cross-validation. On the other hand, Gradient Boosting gives the best result when done without five-fold.
VI. Limitations and Future Plan
During this long process from data collection to sentiment analysis, not everything went as planned and a few obstacles were in the way. Despite our utmost sincerity, this paper also bears some limitations, which are as follows:
• Limitation of how much data can be gathered will always be an issue in machine learning, where more data almost always leads to more accurate learning, so a bigger dataset was preferable
• Due to the lack of a proper toolset for Bangla data preprocessing, the data needed to be cleaned manually, which was hectic work
• During the cleaning process, some Bangla words were converted into ASCII codes, which then had to be removed again to prevent further noise in the data
• As a dictionary of positive and negative words in Bangla was not found, a lexicon-based approach could not be performed
The work done so far is just the beginning of building up a sophisticated sentiment analysis tool for the Bangla language focusing on Play Store reviews. There were also some ideas which could not be explored due to lack of time and the research group being</s>
<s>small. The ideas which are yet to be explored are as follows:
• Gathering more and more data to build a trustworthy and experimentally proven dataset, to be made freely available to the public so that research programs in this field can take a step forward
• Artificial Neural Networks (ANN) are a very popular machine learning technique which, for their unique insights and complex calculations, also demand a large and well-developed dataset. After dataset building is complete, the goal is to use an ANN to perform sentiment analysis
• Building hybrid algorithms combining both basic and complex machine learning techniques, which may improve the performance further
VII. Conclusion
Here a system for Play Store sentiment analysis is presented in which two main approaches are followed: models based on general machine learning algorithms like SVM, and ensemble techniques combining different algorithms. Almost all the algorithms showed promising results after training on the dataset, which was also created as a part of this research. Before training, the fetched data needed cleaning and proper annotation. Using TF-IDF enabled the option to eliminate sentiment-neutral words. The success of the Naïve Bayes algorithm proves that the sentiment detection system can ensure good results even without context, although other algorithms have shown better results. So it can be concluded that sentiment analysis on Google Play Store Bangla data resembles other sentiment analysis instances, where more data can ensure the use of this research in a broader scope.
References
[1] Dai Quoc Nguyen, Dat Quoc Nguyen and Son Bao Pham 2013. A Two-Stage Classifier for Sentiment Analysis. International Joint Conference on Natural Language Processing, pages 897–901, Nagoya, Japan, October 14-18, 2013.
[2] Peter D. Turney 2002. Thumbs Up or Thumbs Down?
Semantic Orientation Applied to Unsupervised Classification of Reviews. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, July 2002, pp. 417-424.
[3] Apoorv Agarwal, Boyi Xie, Ilia Vovsha, Owen Rambow, Rebecca Passonneau. 2011. Sentiment Analysis of Twitter Data. In Proceedings of the Workshop on Languages in Social Media, LSM'11, pages 30-38, Stroudsburg, PA, USA, Association for Computational Linguistics.
[4] Svetlana Kiritchenko, Xiaodan Zhu, Colin Cherry, and Saif M. Mohammad 2014. Detecting Aspects and Sentiment in Customer Reviews. Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 437-442, Dublin, Ireland, August 23-24, 2014.
[5] Shanta Phani, Shibamouli Lahiri, Arindam Biswas. Sentiment Analysis of Tweets in Three Indian Languages. Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing, pages 93–102, Osaka, Japan, December 11-17 2016.
[6] Tanmoy Chakraborty and Sivaji Bandyopadhyay. Identification of Reduplication in Bengali Corpus and their Semantic Analysis: A Rule-Based Approach. Proceedings of the Multiword Expressions: From Theory to Applications (MWE 2010), pages 73–76, Beijing, August 2010.
[7] Dipankar Das, Sivaji Bandyopadhyay. Labeling Emotion in Bengali Blog Corpus – A Fine Grained Tagging at Sentence Level. Proceedings of the Eighth Workshop on Asian Language Resources, pages 47-55, Beijing, China, August 2010.</s>
<s>Sentiment Analysis Using Out of Core Learning
2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), 7-9 February, 2019
Mahmudul Hasan, Ishrak Islam, K. M. Azharul Hasan
Department of Computer Science and Engineering
Khulna University of Engineering and Technology, Bangladesh
Email: mahmudul@cse.kuet.ac.bd, ishrak.islam@gmail.com, az@cse.kuet.ac.bd
Abstract—Text sentiment detection for a particular language other than English is presently one of the challenging tasks. The reasons are: it needs a large dataset; the language has no specific structure; one word can have different meanings; and it is hard even for a human to understand the connotation of particular words. There exist several proposed architectures for detecting emotions in the Bengali language using machine learning and deep learning approaches, but they are not accurate enough to predict the exact emotion of a sentence. And there is still no standalone architecture available that can extract the sentiments hidden inside a sentence in different languages. In this paper, we propose an abstract model that can enable sentiment analysis without any restriction to a fixed language, making it applicable to almost any language. With the use of natural language processing, we have extracted the features, and these features are then fed to different machine learning models for classification. As our main concern was to build a general model, this model is confined to binary classification, i.e., positive and negative. Apart from this, in our system architecture we have implemented stochastic gradient descent for optimization. So our model can be called an out-of-core learning model, where the model can be updated when new user data is inserted without retraining the whole model.
For the evaluation of the performance of our model, we have trained the estimators against a Bangla-translated IMDb review dataset and calculated different evaluation metrics for our estimators. The dataset is translated into Bangla using Google Translator.
Index Terms—Sentiment Analysis, Logistic Regression, Perceptron, Data Preprocessing, Feature Extraction, Tokenizing, Stemming, Tagging, Stemmer, TF-IDF.
I. INTRODUCTION
Tremendous development in computer science is happening as the decades pass by. Computers are made more capable and more user-friendly. Besides, an enormous amount of information like text, images and video is generated over the internet every day. Classification of this growing data, filtering spam, defining meaning and finding patterns in this kind of massive data have become significant issues in the present era [1]. To solve these problems and make digital components more humanoid, human languages need to be understood by the computer. Now, with the help of natural language processing and machine learning algorithms, it has become easy for us to convert human text into a more computer-friendly form. One application of this field is called sentiment analysis, which involves building a system to collect and categorize human opinions on a particular subject [2], [3]. It also predicts future patterns and behaviors, allowing decision makers to make decisions more practically. Automated sentiment analysis often uses machine learning to mine text for sentiment. Sentiment analysis can be useful in several ways. It is essential for organizations which sell products online, because reviews of products have a significant impact on purchases. Different types of social media are used as marketing platforms nowadays because people feel more comfortable sharing their opinions about products on social media. Moreover, others are increasingly swayed by these reviews, comments,</s>
<s>and feedback about the products. Hence sentiment analysis has become a new business strategy in this era. Another essential requirement for developing an artificially intelligent agent is that it should rely upon human opinion to make decisions [4], [5]. The main applications of sentiment classification are in the broad areas of text mining and natural language processing, including obtaining product reviews, opinions on a specific matter, and text classification.
II. RELATED WORK
There have been several approaches taken by researchers to model human language in machine form [6]–[8]. [9] presented a sentiment detection technique using the valency of a word, but the valency was calculated using the English SentiWordNet. An n-gram based sentiment detection was proposed in [10] for Bangla natural text; it showed that performance increases for 2-grams when negative words occur more in a sentence. In [11] the authors used Long Short-Term Memory networks (LSTM) and Gated Recurrent Units (GRU) to classify tickets into the right category in a ticket system; there were 66 classes and the data size was 217,000. In [12] Python and NLTK were used for processing text, and the Naive Bayes algorithm was applied for classifying text. In [13] the authors predicted users' sentiment polarity using a lexicon-based classifier; they created a dataset manually from Flipkart, Snapdeal, Amazon and some review forums, but they did not take any machine learning approach. In [14] a review was given of papers on text classification using clustering and principal component analysis (PCA). The authors of [15] gave an idea about two architectures for online advertising: a Stand-alone Architecture and a Scalable Repository-based Architecture. In [16] a Multinomial Naive Bayes (MNB) classifier combined with a Bayesian Networks (BN) classifier was proposed, incorporating feature extraction and feature selection; a ten-fold cross-validation model was used for evaluation.
They used the review polarity dataset and the Reuters-21578 text collection for training and testing. A web-based sentiment classification for Bangla text was implemented in [17]; the authors presented diverse combinations of Bangla words from the web using SVM.
978-1-5386-9111-3/19/$31.00 ©2019 IEEE
III. PROPOSED FRAMEWORK FOR SENTIMENT ANALYSIS
In this paper, we propose a framework for sentiment mining where the following five phases are sequentially combined for data extraction, preprocessing and building a model.
A. Preprocessing
Before feeding data to the estimators, we have to clean the text datasets. With knowledge of natural language processing, we can filter sentences and obtain the more significant features. The steps are shown in Figure 1.
Fig. 1: Implementation of filtering
1) Cleaning all text data: First of all, the text might contain HTML markup as well as punctuation and other non-letter characters. While HTML markup does not carry much helpful semantics, punctuation marks can represent useful, additional information in specific NLP contexts. We used regular expressions such as
<[^>]*>   (detects any type of markup language)
and
(?::|;|=)(?:-)?(?:\)|\(|D|P)   (detects emoticons in text)
for finding HTML markup and emoticons. After that, we can strip these out of the texts.
2) Tokenizing data into sentences: Tokenizing data means that we have to split sentences or phrases into words or tokens. Tokenization is mandatory in preprocessing, and tokenization of a sentence is necessary for filtering essential words [18]. For example, the sentence "Quick Brown Fox" can be tokenized into the three words "Quick", "Brown" and "Fox".
3) Stemming and Lemmatizing words: Stemming</s>
<s>is a strategy to remove affixes from a word, ending up with the stem. So far, there are four established stemmers: PorterStemmer, RegexpStemmer, SnowballStemmer and LancasterStemmer [1]. We have utilized PorterStemmer for the implementation of opinion mining. Like stemming, lemmatization does similar work: it is mainly used to find the root word. It truncates the insignificant portion of words, finds the valid lemma, and checks whether it exists in a dictionary. For lemmatization, we used Princeton University's WordNet database. Figure 1 (3) depicts the process of stemming and lemmatization.
4) Part of Speech Tagging: There are many types of words, and it is required to identify the label of each word. For this purpose, part-of-speech tagging is used. After that, part-of-speech tag tuples are created, in the specific form of a list generated from words and their respective tags. There are mainly eight parts of speech; among them, our primary concern is the noun, adjective and verb, which carry the sentiment or sense of the sentence [1]. For example, if we apply POS tagging to the sentence "John Likes Her", the result is "John (Noun)", "Likes (Verb)", "Her (Pronoun)".
5) Extracting chunks from the sentence: After part-of-speech tagging there remain many unnecessary and duplicate words, which should be filtered out. For this reason, extraction of chunks from the sentence is applied. After extraction, the sentence takes the form of a tree of chunks which can be easily transformed and processed [1].
6) Transforming chunks: Transforming chunks means replacing words, where rearranging words and correcting them takes place without changing the meaning. The changes can be as follows:
• Swapping verb phrases
• Transforming plural nouns into singular
• Swapping infinitive phrases
• Eradicating unnecessary words
The chunk transforms are for grammatical correction and rearranging phrases without loss of significance.
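Steps 1–3 above can be sketched with NLTK's PorterStemmer; the cleaning patterns follow the markup and emoticon regexes given earlier, and the helper name `preprocess` is only illustrative:

```python
import re

from nltk.stem import PorterStemmer

def preprocess(text):
    # Step 1: strip HTML markup, keeping any emoticons found in the raw text.
    text = re.sub(r"<[^>]*>", "", text)
    emoticons = re.findall(r"(?::|;|=)(?:-)?(?:\)|\(|D|P)", text)
    text = re.sub(r"[\W]+", " ", text.lower()) + " " + " ".join(emoticons)
    # Step 2: tokenize on whitespace; Step 3: stem each token.
    stemmer = PorterStemmer()
    return [stemmer.stem(token) for token in text.split()]

print(preprocess("<br />The recipes book is delicious :)"))
# e.g. ['the', 'recip', 'book', 'is', 'delici', ':)']
```

Lemmatization against WordNet and the POS-tagging/chunking steps would follow the stemming stage in the same pipeline.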
Figure 1 (5) represents the process.
7) Refactoring to words: After transformation, chunks are refactored to extract features from them. Suppose that after preprocessing, the sentence "The book of recipes is delicious" is converted to "('delicious', 'JJ'), ('recipe', 'NN'), ('book', 'NN')". After refactoring, we will get "Delicious Recipe Book".
B. Feature Extraction
Feature extraction can be done in many ways, but we have used the following two techniques:
1) Term Frequency-Inverse Document Frequency (TF-IDF): Machine learning algorithms need the data to be represented in a particular form, otherwise the algorithms cannot be applied. So we need to transform the text data into a feature set. In this case, the TF-IDF vectorizer is utilized. It is the weighting factor of the document and represents how important a word or term is for the document. Suppose the count of the word "W1" is higher than the count of the word "W2" in the document; then the word "W1" is more important for that document, so its weighting factor is made much bigger. According to [2], the computation of TF-IDF is the following:
tf-idf(x, y) = tf(x, y) * idf(x, y)
where,
• tf-idf(x, y) = the resulting TF-IDF value of a term
• tf(x, y) (term frequency) = the number of occurrences of the term in the document
• idf(x, y) (inverse document frequency) = the logarithm of the inverse probability of the term being found in any document
We have shown the TF-IDF of a sample of our dataset in Figure 2.
Fig. 2: Word</s>
<s>to vector implementation
2) Hashing Vectorizer: When there is a massive amount of data, it is hard to vectorize the whole of it. In this case, instead of TF-IDF, the hashing vectorizer is used. The hashing vectorizer does not consider inverse document frequency; instead, it just hashes each term frequency in such a way that collisions are avoided. After the hashing vectorizer is applied, each token is mapped to a feature integer [2]. The hashing vectorizer has the following advantages:
• It is memory efficient, so there is no need to store the whole dictionary of data.
• Hash functions are an efficient way of mapping terms to features, and they do not necessarily need to be applied only to term frequencies.
• As there is no state computed during the fit, it can be utilized as part of a streaming (partial fit) or parallel pipeline.
• It may be possible to reduce the length of the hash feature vector, in which case the complexity also reduces significantly with an acceptable loss of effectiveness or accuracy.
• It is fast and simple.
• Handling of missing data is easy with the hashing vectorizer.
There are also a couple of cons (versus using a CountVectorizer with an in-memory vocabulary):
• There is no inverse mapping.
• Hash collisions may occur at any time.
• There is no IDF weighting.
• A hash table does not accept any null values.
• The hashing vectorizer cannot limit features based on document frequency.
C. Training the estimators
For training the estimators, the following things must be handled:
1) Developing a system that can read data from different sources like the web, hard drive, database, etc.
2) A way of extracting features from these data, which is described earlier.
3) An incremental algorithm.
We have used the following classifiers:
1) Perceptron: Rosenblatt's threshold perceptron model uses a reductionist approach to mimic how a single neuron in the brain works: it either fires, or it does not [19]. Figure 3 illustrates the general concept of the perceptron.
Fig.
3: Perceptron
The equation for a perceptron is the following:
ŷ = w1*x1 + w2*x2 + ... + wn*xn
where,
ŷ = predicted value
w1, w2, ..., wn = weight matrix
x1, x2, ..., xn = input matrix
Suppose we are considering two inputs x1 = 1 and x2 = 0, and the weights are w1 = .8 and w2 = .7; then the predicted value would be .8 * 1 + .7 * 0 = .8.
2) Logistic Regression: It is similar to the perceptron, but an extra sigmoid function is added [19]. Figure 4 illustrates the concept of Logistic Regression.
Fig. 4: Logistic Regression Classifier
The equation for the sigmoid function is as follows:
sigmoid(x) = 1 / (1 + e^(-x))
So, considering the same input as for the perceptron, we get the sigmoid value 1 / (1 + e^(-.8)) ≈ .69.
3) Multinomial Naive Bayes: Naive Bayes is a simple algorithm based on Bayes' theorem. Multinomial Naive Bayes is a specialized version of Naive Bayes; both are supervised learning methods [20]. Instead of another distribution, it implements the multinomial distribution, which is mostly related to counting; hence Multinomial Naive Bayes is well suited to text processing. The following shows the equation of MNB:
P(D|d) = log(Nc / N) + Σ_{i=1..n} log((ti + α) / (Σt + α·n))
where,
P(D|d) = probability of document D in class d
Nc = total documents in</s>
<s>class d
N = total documents
ti = weighting factor of term i
Σt = total weighting factor in class d
α = smoothing parameter
4) Passive Aggressive Classifier: The Passive Aggressive Classifier is one of the best classifiers for online learning [21]. If the model correctly classifies the data, then it is kept; but if it gives a wrong classification, the model is adapted according to the newly entered correct data.
D. Dataset Description
TABLE I: Portion of our Dataset
No. | Review                                   | Sentiment
0   | I went and saw this movie last ...       | 1
1   | Actor turned director Bill Paxton ...    | 1
2   | As a recreational golfer with ...        | 1
3   | I saw this film in a sneak preview ...   | 1
4   | Bill Paxton has taken the true story ... | 1
Our model is based on the IMDb review dataset described in [22]. The movie review dataset consists of 50,000 polar movie reviews that are labeled as either positive or negative. Table I shows a portion of the IMDb dataset. We have used Google Translator to translate these reviews into the Bengali language and trained our estimators against the translated dataset. There are fifty thousand reviews in the translated dataset, which we have split in an eighty-to-twenty ratio for training and testing respectively.
E. Evaluation Criteria
For any classification problem, including sentiment analysis, the following evaluation metrics should be calculated.
1) Accuracy: Accuracy is the measure of how correctly a classifier classifies the data. The equation for accuracy is the following:
Accuracy = (T.P. + T.N.) / (T.P. + T.N. + F.P. + F.N.)
where T.P. = True Positive, T.N. = True Negative, F.P. = False Positive and F.N. = False Negative.
2) Precision: Precision depicts how precisely a classifier can classify the data, even for a wrong class. With the help of both accuracy and precision we can estimate a classifier's performance. For a single class, the precision value is given by the following equation:
Precision = T.P. / (T.P.
+ F.P.)
3) Recall: Recall means that, of the number of samples of one class, how many are correctly classified as that class. The equation for recall is the following:
Recall = T.P. / (T.P. + F.N.)
4) F1 Score: The F1 score is the weighted average of precision and recall. The equation for the F1 score is the following:
F1 Score = 2 * T.P. / (2 * T.P. + F.P. + F.N.)
IV. RESULT
Figure 5 shows the comparison of accuracy between the estimators. In this case, the estimators are trained and tested against IMDb's binary-labeled translated database [23], [24]. After building the model, our goal is to take user responses, train the estimators on them, and give the user a dynamic model. We have used the web framework Flask from Python and SQLite database queries for the implementation of our work.
Fig. 5: Evaluation
The performance results are quite mixed across the estimators. Multinomial Naive Bayes shows the poorest performance, whereas Passive Aggressive performs better but shows some deviations. Stochastic Gradient Descent and Perceptron show average performance. After updating the datasets in the database, we fit the estimators again against the new datasets, repeating the processes. Table II shows different evaluation metrics for the estimators.
TABLE II: Evaluation Metrics of the Estimators
Estimators              | Acc. | Pr.  | Rc.  | F1
Logistic Regression     | 0.88 | 0.85 | 0.91 | 0.88
Perceptron              | 0.86 | 0.85 | 0.87 | 0.87
Passive Aggressive      | 0.89 | 0.88 | 0.89 | 0.88
Multinomial Naive Bayes | 0.87 | 0.87 | 0.87 | 0.87
Ensemble</s>
<s>Using Mean | 0.88 | 0.86 | 0.90 | 0.88
Acc. = Accuracy, Pr. = Precision, Rc. = Recall
Venkataraman et al. in article [11] used a deep learning framework for English-language text classification; however, they gave no analysis using a machine learning approach. The authors of [12] and [15] worked only on the English language: [12] used only the Naive Bayes technique, and the authors of [15] did not use any machine learning or deep learning technique. Compared to other articles, we have tried to build a language-independent model and give an analysis of Bengali-language data. If our framework is followed sequentially, we can classify sentiments of any language.
V. CONCLUSION
Our research gives a comparison between classification techniques for sentiment analysis. In sentiment analysis, the same word can carry different sentiments. Suppose the word is "lust": if the sentence is "Lust for her", it is considered negative, but if the sentence is "Lust for knowledge", it can be considered a positive opinion. Hence it is quite hard to build a predictor with a high level of accuracy over large-scale data containing diverse classes. Besides, nowadays text is stored in many forms, like unstructured and semi-structured formats, which makes it complicated to maintain this type of data and find patterns in it. With the fast growth of data, the requirement for high accuracy in content classification is increasing. The most effective method of sentiment analysis is difficult to determine, as results show very diverse performance. Besides, we have used a Bengali dataset translated from the IMDb review dataset, which contains much noise; for example, after translating a sentence into Bengali, some English words remained. The work can be extended to emotion detection from text, since sentiments are part of the basic emotions of a text.
REFERENCES
[1] J. Perkins. Python 3 Text Processing with NLTK 3 Cookbook. Packt Publishing, Birmingham B3 2PB, UK, 2014.
[2] A. A.</s>
Hakim, A. Erwin, K. I. Eng, M. Galinium, and W. Muliady. Automated document classification for news article in Bahasa Indonesia based on term frequency inverse document frequency (TF-IDF) approach. In 2014 6th International Conference on Information Technology and Electrical Engineering (ICITEE), pages 1–4, Oct 2014.
[3] KM Azharul Hasan, Sajidul Islam, GM Mashrur-E-Elahi, and Mohammad Navid Izhar. Sentiment recognition from Bangla text. In Technical Challenges and Design Issues in Bangla Language Processing, pages 315–327. IGI Global, 2013.
[4] Princeton University. "About WordNet." WordNet, Princeton University, 2010. [Online]. http://wordnet.princeton.edu, [Accessed: 2019-01-10].
[5] scikit-learn.org. Strategies to scale computationally: bigger data. [Online]. http://scikit-learn.org/stable/modules/scaling_strategies.html#strategies-to-scale-computationally-bigger-data, [Accessed: 2019-01-10].
[6] Shaika Chowdhury and Wasifa Chowdhury. Performing sentiment analysis in Bangla microblog posts. In 2014 International Conference on Informatics, Electronics & Vision (ICIEV), pages 1–6. IEEE, 2014.
[7] Vivek Kumar Singh. Sentiment analysis research on Bengali language texts. 2015.
[8] Muhammad Mahmudun Nabi, Md Tanzir Altaf, and Sabir Ismail. Detecting sentiment from Bangla text using machine learning technique and feature analysis. International Journal of Computer Applications, 153(11), 2016.
[9] KM Azharul Hasan, Mosiur Rahman, et al. Sentiment detection from Bangla text using contextual valency analysis. In Computer and Information Technology (ICCIT), 2014 17th International Conference on, pages 292–295. IEEE, 2014.
[10] SM Abu Taher, Kazi Afsana Akhter, and KM Azharul Hasan. N-gram based sentiment mining for Bangla text using support vector machine. In 2018 International Conference on
<s>Bangla Speech and Language Processing (ICBSLP), pages 1–5. IEEE, 2018.
[11] A. Venkataraman. Deep learning algorithms based text classifier. In 2016 2nd International Conference on Applied and Theoretical Computing and Communication Technology (iCATccT), pages 220–224, July 2016.
[12] L. Jin, W. Gong, W. Fu, and H. Wu. A text classifier of English movie reviews based on information gain. In 2015 3rd International Conference on Applied Computing and Information Technology / 2nd International Conference on Computational Science and Intelligence, pages 454–457, July 2015.
[13] S. Mandal and S. Gupta. A lexicon-based text classification model to analyse and predict sentiments from online reviews. In 2016 International Conference on Computer, Electrical Communication Engineering (ICCECE), pages 1–7, Dec 2016.
[14] M. Kaur and M. Bansal. Text classification using clustering techniques and P.C.A. In 2016 Fourth International Conference on Parallel, Distributed and Grid Computing (PDGC), pages 642–646, Dec 2016.
[15] A. Z. Adamov and E. Adali. Opinion mining and sentiment analysis for contextual online-advertisement. In 2016 IEEE 10th International Conference on Application of Information and Communication Technologies (AICT), pages 1–3, Oct 2016.
[16] A. Rahman and U. Qamar. A Bayesian classifiers based combination model for automatic text classification. In 2016 7th IEEE International Conference on Software Engineering and Service Science (ICSESS), pages 63–67, Aug 2016.
[17] Mir Shahriar Sabuj, Zakia Afrin, and KM Azharul Hasan. Opinion mining using support vector machine with web based diverse data. In International Conference on Pattern Recognition and Machine Intelligence, pages 673–678. Springer, 2017.
[18] KM Hasan, Amit Mondal, Amit Saha, et al. Recognizing Bangla grammar using predictive parser. arXiv preprint arXiv:1201.2010, 2012.
[19] S. Raschka. Python Machine Learning. Packt Publishing, Birmingham B3 2PB, UK, 2015.
[20] B. Y. Pratama and R. Sarno.
Personality classification based on Twitter text using Naive Bayes, KNN and SVM. In 2015 International Conference on Data and Software Engineering (ICoDSE), pages 170–174, Nov 2015.
[21] Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. Online passive-aggressive algorithms. J. Mach. Learn. Res., 7:551–585, December 2006.
[22] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT '11, pages 142–150, Stroudsburg, PA, USA, 2011. Association for Computational Linguistics.
[23] E. Diemert. Out-of-core classification of text documents. [Online]. http://scikit-learn.org/stable/auto_examples/applications/plot_out_of_core_classification.html, [Accessed: 2019-01-10].
[24] M. Lichman. UCI Machine Learning Repository, University of California, Irvine, School of Information and Computer Sciences, 2013. [Online]. http://archive.ics.uci.edu/ml, [Accessed: 2019-01-10].</s>
<s>Supervised Approach of Sentimentality Extraction from Bengali Facebook Status. Conference Paper · December 2016. DOI: 10.1109/ICCITECHN.2016.7860228. Authors include Md Saiful Islam (Shahjalal University of Science and Technology), Md Ashiqul Islam, and Md Afjal Hossain (Shahjalal University of Science and Technology).
Supervised Approach of Sentimentality Extraction from Bengali Facebook Status
Md. Saiful Islam, Computer Science and Engineering, Shahjalal University of Science & Technology, Sylhet, Bangladesh, saiful-cse@sust.edu
Md. Ashiqul Islam, Computer Science and Engineering, Shahjalal University of Science & Technology, Sylhet, Bangladesh, rajib.sust47@gmail.com
Md. Afjal Hossain, Computer Science and Engineering, Shahjalal University of Science & Technology, Sylhet, Bangladesh, afjal.sm19@gmail.com
Jagoth Jyoti Dey, Computer Science and Engineering, Shahjalal University of Science & Technology, Sylhet, Bangladesh, jagothjyotidey91@gmail.com
Abstract— Sentiment is the only thing that separates humans and machines. To simulate feelings in machines, many researchers have been trying to create methods to automate the process of extracting opinions about particular news, products, or life entities. Sentiment Analysis (SA) is a combination of the opinions, emotions, and subjectivity of a text.
Currently SA is one of the most demanding tasks in Natural Language Processing. Social networking sites like Facebook are widely used for expressing opinions about particular entities of life. Newspapers publish news about particular events, and users express their feedback in news comments. Online product feedback is increasing day by day, so mining reviews and opinions plays a very important role in understanding people's satisfaction. Such opinion mining has potential for knowledge discovery. The main target of SA is to find opinions in text, extract sentiments from them, and define their polarity, i.e. positive or negative. In this domain, most models have been designed for the English language. This paper describes a novel approach using the Naïve Bayes classification model for the Bengali language. Here a supervised classification method is used together with language rules to detect sentiment in Bengali Facebook statuses. Keywords: Antonym words, Naïve Bayes rules, N-gram model, Parts of Speech (POS) tagger, Stemming. I. INTRODUCTION With smartphones in everybody's hands and personal computers in every home, people like to share information. They often use blogs, forums, e-news, and social networking sites like Facebook and Twitter to express their views and opinions. A huge amount of content is generated every day, so mining this data and extracting user sentiment is an important task [1]. Sentiment analysis is the process of using text analytics to mine various sources of data for opinions. Often, sentiment analysis is performed on data collected from the Internet and from various social media platforms. Politicians and governments often use sentiment analysis to understand how people feel about them and their policies. Bengali is the language</s>
<s>native to Bangladesh and the Indian state of West Bengal. Bangla is the national language of Bangladesh and the second most spoken language in India, with about 250 million native speakers and about 300 million total speakers worldwide. It is the seventh most spoken language in the world by total number of native speakers and the eleventh by total number of speakers [2]. Bangla sentence structure is very flexible compared to English. Consider the general structure of an English sentence: Subject + Verb + Object (example: "I love you"); any other ordering of this sentence is incorrect. But in the Bengali sentence ("I love you"), any ordering is correct. Bangla is a free-word-order, highly morphological language. Therefore, data collection, generation, anomaly detection, and feature extraction pose significant challenges. The primary goal of this paper is to analyze the sentiment of individual Facebook statuses. II. RELATED WORK Natural language processing for Bangla is not as rich as for English, and in the field of sentiment analysis only a small amount of work has been done in Bangla. The lack of available datasets and dependable APIs such as SenticNet, SentiWordNet, and WordNet-Affect makes it difficult to carry out the sentiment analysis task. At an early stage, Das and Bandopadhya designed and developed a SentiWordNet for the Bengali language using an English-Bangla dictionary; 35805 words were created by them [3]. K. M. Azharul Hasan, Mosiur Rahman, and Badiuzzaman [4] designed a model using the WordNet API to get the senses of each word according to its part of speech, and the SentiWordNet API to get the prior valence (i.e. polarity) of each word. They calculate the total positivity, negativity, and neutrality of a sentence or document with respect to the total senses. Model accuracy is 76%.
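To make the prior-valence idea in [4] concrete, here is a minimal sketch of lexicon-based polarity scoring: sum per-word positive and negative prior valences from a sentiment lexicon and report them relative to the total valence. The tiny LEXICON below is a hypothetical stand-in for SentiWordNet-style scores; the WordNet sense lookup used in the actual paper is omitted.

```python
# Sketch of lexicon-based polarity scoring in the spirit of [4]:
# aggregate per-word prior valences and normalize by the total.
# LEXICON is a hypothetical stand-in for SentiWordNet entries.

LEXICON = {               # word -> (positive score, negative score)
    "good":  (0.75, 0.00),
    "love":  (0.80, 0.05),
    "bad":   (0.05, 0.70),
    "hate":  (0.00, 0.85),
}

def polarity(tokens):
    """Return (positivity, negativity) as fractions of total valence."""
    pos = sum(LEXICON.get(t, (0.0, 0.0))[0] for t in tokens)
    neg = sum(LEXICON.get(t, (0.0, 0.0))[1] for t in tokens)
    total = pos + neg
    if total == 0:        # no lexicon hits: treat the text as neutral
        return 0.0, 0.0
    return pos / total, neg / total

print(polarity("i love this good movie".split()))   # mostly positive
print(polarity("i hate this bad movie".split()))    # mostly negative
```

A document can then be labeled by whichever fraction is larger, with ties treated as neutral.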
Shaika Chowdhury and Wasifa Chowdhury [5] designed a model using the SentiWordNet and WordNet APIs for lexical analysis, and implemented support vector machine and maximum entropy models to detect the sentiment of microblogs; they also gave preference to emoticons. Model accuracy reached up to 93% using unigram and emoticon features. In Natural Language Processing, English is the most popular language for research, and many models have been designed and developed for extracting sentimentality. The main approaches used in this area are [6]:
- Subjective lexicon
- N-gram modeling
- Supervised classification method
We chose the supervised classification method to detect the sentimentality of Bengali Facebook statuses. In sentiment analysis, many machine learning approaches have been taken to detect sentimentality. Naïve Bayes classification, maximum entropy, and support vector machines, using an n-gram approach along with POS information, have been used to determine polarity (i.e. positive or negative) in the English language [7]. Deep neural network approaches have been taken for assigning the polarity of English text; one proposed an efficient embedding for modeling higher-order (n-gram) phrases that projects the n-grams to a low-dimensional latent semantic space, where a classification function can be defined [8]. We chose the Naïve Bayes classification method along with bigrams and linguistic analysis for sentiment analysis of Bengali Facebook statuses. III. OUR METHODOLOGY A. Data</s>
<s>Domain. Two of the most popular and common social networking sites are Facebook and Twitter. In South Asia, Facebook is generally more popular than Twitter. People from South Asia use Facebook to advertise their local products and services, and many followers provide product feedback through comments or statuses in Facebook groups. They write their comments in their native language, Bengali. Facebook users are increasing day by day, and Bengali content is increasing with them. As almost 300 million people in the world use Bengali and most of them use Facebook, Facebook is the best option for retrieving data for our thesis work. Much work has already been done on analyzing the sentiment of Twitter data, but sentiment analysis on Bengali Facebook data is scarce. B. Data Collection We consolidated users' comments from Facebook manually, gave them a structured format, and tagged the dataset as either positive or negative. We collected over 1000 positive comments and 1000 negative comments for training and approximately 500 comments for testing. All the comments we collected are public.
TABLE-I. Corpus data statistics.
Parameters                        Positive  Negative
Status                            1000      1000
Unique Words                      2176      2234
Adjective                         823       768
Valence-shifter word frequency    306       378
Fig.1. Number of positive and negative statuses, number of adjectives, number of valence-shifter (VS) words, and unique words (UW). C. Preprocessing The data preprocessing and cleaning step plays an important role in machine learning. Preprocessing includes removing symbols such as hashtags (#), website URLs, etc. Stemming words is also part of preprocessing; for word stemming we used the nltr open-source software [9]. D. Negation Handling In Bengali, some words are used to negate the polarity of a sentence; these are called valence-shifting words. These valence-shifting words play an important role in our sentiment detection.
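The normalization this negation handling feeds into (detect a trailing valence shifter, drop it, and swap the adjective for its antonym, as detailed in the linguistic-analysis section) can be sketched as follows. The paper's Bengali valence-shifting words and its 1200-word Synonym-Antonym corpus are not reproduced in this text, so the SHIFTERS and ANTONYMS tables below are hypothetical English stand-ins.

```python
# Minimal sketch of valence-shifter normalization. SHIFTERS and
# ANTONYMS are hypothetical placeholders for the paper's Bengali
# shifter words and its Synonym-Antonym corpus.

SHIFTERS = {"not"}  # stand-ins for Bengali valence-shifting words
ANTONYMS = {"good": "bad", "bad": "good", "happy": "sad", "sad": "happy"}

def normalize(tokens):
    """If a simple sentence ends with a valence shifter, drop the
    shifter and replace any adjective with its antonym."""
    if tokens and tokens[-1] in SHIFTERS:
        tokens = tokens[:-1]                      # remove the shifter
        tokens = [ANTONYMS.get(t, t) for t in tokens]  # flip adjectives
    return tokens

print(normalize(["the", "movie", "is", "good", "not"]))
# -> ['the', 'movie', 'is', 'bad']
```

After this rewrite, the classifier sees a plain sentence whose adjective already carries the shifted polarity, so no separate negation feature is needed.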
TABLE-II. Bengali sentences and their polarity (Sentences, Polarity). TABLE-II shows that one valence-shifting word can change the polarity of a sentence (rows 1 & 2, rows 3 & 4). These valence-shifting words occur in both positive and negative statuses and play an important role in changing the polarity of the sentiment of a text. Since they play such an important role, we have to account for this fact in our thesis work. We normalize such statuses with the help of linguistic analysis. E. Linguistic analysis Every language has its own rules for combining words to create sentences. Syntactic analysis attempts to define and describe the rules that speakers use to put words together to create meaningful phrases and sentences. Bengali sentences are divided into three classes: a) simple, b) complex, c) compound. We normalized our statuses by considering some rules that are defined in Bengali grammars [10]. For this normalization we manually built a corpus of synonyms and antonyms of Bengali words; the corpus has 1200 unique words. A small portion of this corpus is given in TABLE-III (Word, Antonym). We apply the following method to normalize a</s>
<s>status:
- Detect a simple sentence with one adjective.
- Find a valence-shifting word placed at the right side of the simple sentence.
- Remove the valence-shifting word.
- Replace the adjective with its antonym [10].
TABLE-IV. Applying normalization (Before Normalization, After Normalization). F. Unigram and Bigram Texts consist of sentences, and sentences consist of words. Human beings can easily understand linguistic structures and their meanings, but machines are not yet smart enough to succeed at natural language comprehension. So, we try to teach some language to machines, as to an elementary school kid. We used unigrams and bigrams as features in the Naïve Bayes classification model. A unigram is just one single word, while a bigram is a word pair; the bigrams within a sentence are all possible word pairs formed from neighboring words in the sentence. IV. PROPOSED ALGORITHM Bayes' theorem is a theorem of probability theory originally stated by the Reverend Thomas Bayes. We use Laplace (add-1) smoothing for Naïve Bayes. Our target is to assign a document d to the class c* = arg max_c P(c|d). We derive the Naive Bayes (NB) classifier by first observing that, by Bayes' rule,
P(c|d) = P(c) P(d|c) / P(d)   (1)
where P(d) plays no role in selecting c*. To estimate the term P(d|c), Naive Bayes decomposes it by assuming the features f_i are conditionally independent given d's class:
P(d|c) = ∏_i P(f_i|c)   (2)
Our training method consists of relative-frequency estimation of P(c) and P(f_i|c), using add-one smoothing. The polarity of a Facebook status is calculated by the following method:
1) Input a set of positive and negative statuses.
2) Perform preprocessing on this set of data.
3) Stem every word in the set.
4) Detect simple sentences with one adjective; if valence-shifting words occur in such a sentence, apply the linguistic analysis method described above.
5) Count the unigrams and bigrams of the data words.
6) Measure the prior probability and the conditional probabilities.
7) Apply Laplace smoothing to these counts and learn the parameters.
8) For a query text d, choose c* = arg max_c P(c|d).   (3)
9) The class with the maximum score between the two classes is our desired output class.
V. RESULT EVALUATION In this article, we used the Naïve Bayes classification model to classify Facebook statuses as positive or negative, which gives us a satisfactory result with an F-score of 0.72. We collected around 2000 status updates from 70 users. K. M. Azharul Hasan, Mosiur Rahman, and Badiuzzaman [4] designed a model using Naïve Bayes in which Bengali sentences were translated to English using an API, together with the WordNet and SentiWordNet APIs. TABLE-I contains the corpus statistics. Fig.1 shows the positive and negative status counts for the training and testing phases. Classifier accuracy is shown in terms of precision, recall, and F-score.
TABLE-V. Result evaluation.
Method                    Precision  Recall  F-score
Naïve Bayes with Unigram  0.65       0.56    0.60
Naïve Bayes with Bigram   0.77       0.68    0.72
Fig.2. Naïve Bayes (NB) classification result analysis with unigram & bigram features. VI. ACKNOWLEDGMENT</s>
<s>We wish to express our profound sense of gratitude to our supervisor, Assistant Professor Md. Saiful Islam, for introducing us to this research topic and providing his valuable guidance and unfailing encouragement throughout the course of the work. We thank the SUST-NLP Research members for helping us to collect data. We collected data from the first Bengali search engine, Pipilika [16]. We are immensely grateful to them for their constant advice and support toward the successful completion of this work. VII. CONCLUSION Sentiment analysis is an interesting and newly emerged research topic. It will open a new door for writers, bloggers, and businesspeople: one can easily learn the percentage of product acceptance and shape a strategy to improve product quality. We used several supervised machine learning methods, which give us approximately satisfactory accuracy. Our model runs on a small dataset; in future, the data corpus can be enlarged and the algorithm improved to achieve better accuracy. The approach presented here is flexible and suggests promising avenues for further research. VIII. REFERENCES
[1] Sneha Mulatkar. Sentiment classification in Hindi. International Journal of Scientific and Technology Research, Volume 3, Issue 5, May 2014.
[2] wikipedia.org, 'Bengali Language'. [Online]. Available: https://en.wikipedia.org/wiki/Bengali_language [Accessed: April 21, 2015].
[3] Amitava Das, Sivaji Bandopadaya. SentiWordNet for Bangla. Knowledge Sharing Event-4: Task, Volume 2, 2010.
[4] K M Azharul Hasan, Mir Shahriar Sabuj, Zakia Afrin (2015). Opinion mining using Naïve Bayes. In: IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE), pp. 511-514, IEEE.
[5] Shaika Chowdhury, Wasifa Chowdhury. "Performing sentiment analysis in Bangla microblog posts", 2014 International Conference on Informatics, Electronics & Vision (ICIEV), 2014, pp.
1-6, doi:10.1109/ICIEV.2014.6850712.
[6] Amandeep Kaur, Vishal Gupta. A survey on sentiment analysis and opinion mining techniques. Journal of Emerging Technologies in Web Intelligence, Vol. 5, No. 4, 2013.
[7] Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 79-86, 2002.
[8] Farah Benamara, Carmine Cesarano, Antonio Picariello, Diego Reforgiato, and V. S. Subrahmanian. Sentiment analysis: Adjectives and adverbs are better than adjectives alone. In Proceedings of the International Conference on Weblogs and Social Media (ICWSM), 2007.
[9] nltr.org, 'snltr-software'. [Online]. Available: http://nltr.org/snltr-software/ [Accessed: April 3, 2015].
[10] banglaacademy.org. [Online]. Available: http://www.ebanglalibrary.com/banglagrammar/ [Accessed: April 3, 2015].
[11] Peter Turney. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews, pages 417-424, 2002.
[12] Bespalov, Dmitriy, et al. Sentiment classification based on supervised latent n-gram analysis, 2011.
[13] T. Joachims. Making large-scale SVM learning practical. Advances in Kernel Methods - Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola (eds.), MIT Press, 1999.
[14] Bespalov, Dmitriy, et al. "Sentiment classification based on supervised latent n-gram analysis." Proceedings of the 20th ACM International Conference on Information and Knowledge Management. ACM, 2011.
[15] Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer.</s>
<s>"Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network." In Proceedings of HLT-NAACL, pp. 252-259.
[16] Bengali search engine Pipilika. Available at: http://www.pipilika.com/</s>
<s>Ishara-Lipi: The First Complete Multipurpose Open Access Dataset of Isolated Characters for Bangla Sign Language. Conference Paper · September 2018. DOI: 10.1109/ICBSLP.2018.8554466. Authors include Md. Sanzidul Islam, Sadia Sultana Sharmin, and Akm Shahariar Azad Rabby (Daffodil International University).
International Conference on Bangla Speech and Language Processing (ICBSLP), 21-22 September, 2018
Ishara-Lipi: The First Complete Multipurpose Open Access Dataset of Isolated Characters for Bangla Sign Language
Md. Sanzidul Islam, Dept. of Computer Science & Engineering, Daffodil International University, Dhaka, Bangladesh, sanzidul15-5223@diu.edu.bd
Sadia Sultana Sharmin Mousumi, Dept. of Computer Science & Engineering, Daffodil International University, Dhaka, Bangladesh, sadia15-5191@diu.edu.bd
Nazmul A. Jessan, Dept. of Computer Science & Engineering, Daffodil International University, Dhaka, Bangladesh, nazmul15-4668@diu.edu.bd
AKM Shahariar Azad Rabby, Dept. of Computer Science & Engineering, Daffodil International University, Dhaka, Bangladesh, azad15-5424@diu.edu.bd
Sayed Akhter Hossain, Dept. of Computer Science & Engineering, Daffodil International University, Dhaka, Bangladesh, aktarhossain@daffodilvarsity.edu.bd
Abstract—Collecting hand-gesture data for sign language is very difficult for researchers. Ishara-Lipi, the first complete isolated-characters dataset of Bangla Sign Language (BdSL), is presented in this article. It will help to increase interaction between the hearing-impaired community and the general public. The dataset contains 50 sets of 36 Bangla basic sign characters, collected with the help of different deaf and general volunteers.
Among the Bangla Sign Language sign characters there are 6 vowels and 30 consonants, with which all Bangla words can be finger-spelled. After discarding mistakes and preprocessing, 1800 character images of Bangla Sign Language were included in the final version of the Ishara-Lipi dataset. The dataset can be used to develop computer-vision-based systems, or any kind of system that allows users to look up the meaning of a BdSL sign.

Index Terms—Bangla Sign Language, Computer Vision, Open Source, Sign Language Characters, BdSL, NLP, Machine Learning, Pattern Recognition, Sign Language Dataset

I. INTRODUCTION

Bangladesh is the 8th most populous country in the world, home to almost 16 crore (160 million) people. Among them, almost 2.6 million people are deaf and mute. Bangla is their mother tongue by birth, but deaf and mute people cannot understand the spoken language that the general population uses. Sign language is the medium widely used by deaf and mute people to communicate with the general public. When a baby is born, it learns its first words from its surroundings: it first hears a word, memorizes it, and then tries to express it. A deaf and mute baby, however, cannot hear anything; for this reason deaf and mute people use different types of body gestures to form signs. There are a few books from which deaf and mute people can learn Bangla sign language. One of them is the Bengali Sign Language Dictionary, published by the National Centre for Special Education, Ministry of Social Welfare, in 1974. Some organizations in Bangladesh work for deaf and mute people, among them the Centre for Disability in Development (CDD), established in 1996 to develop a more friendly society
for disabled persons. Ishara Bhasay Jogajog, a book of Bangla Sign Language, was printed by CDD in 2005 and reprinted in 2015 for learning the basic gestures of BdSL. According to the CDD standard for Bangla Sign Language, there are 36 isolated characters and 10 digits.

In Bangladesh there is no open-access, complete dataset of Bangla sign characters for research and development. With this in mind, we have been working to develop an open-access dataset of isolated characters for Bangla Sign Language. The Ishara-Lipi dataset will help the deaf and hearing-impaired community by enabling the development of educational tools.

II. LITERATURE REVIEW

978-1-5386-8207-4/18/$31.00 2018 IEEE

Fig. 1. Bangla sign language digits.

There are many datasets for sign language character recognition, and researchers have been building recognition systems on top of them, because a character dataset underlies any sign language recognition research. Although various datasets exist around the world, researchers tend to gather their own, with their own standards and environmental conditions, to extend and apply SL recognition to their data.

Research in SL recognition has been carried out for different sign languages in different countries. Many works on SL character recognition only describe the output of the process and do not include a large dataset.

A significant amount of work has been done on SL character recognition. The American Sign Language Lexicon Video Dataset [1] (Neidle and Vogler, 2012) forms such a lexicon for American Sign Language (ASL), containing more than 3000 signs in multiple video views. An Argentinian Sign Language (LSA) paper offers a dataset of 64 signs
(LSA64, Franco and Facundo). This dataset contains 3200 videos of 64 different LSA signs recorded by 10 subjects. An Arabic Sign Language recognition paper [3] covers the 30 manual alphabet signs (Omar Al-Jarrah and Alaa Halawani, 2001), and a Chinese SLR work [2] uses a dataset of 120 signs (Yun Li and Xiang Chen, 2012).

ASL datasets are more complex to recognize because the data are video files, so the signs involve different motions. One essential part of sign language recognition is developing a sign language dataset. Datasets exist for the sign languages of many countries, but until now no BdSL dataset has been available. Our dataset contains 50 sets of the 36 Bangla sign characters; an open dataset offers a remarkable benefit for research in BdSL character recognition.

III. DATA COLLECTION AND PREPROCESSING

The proposed data collection and preprocessing methods are shown in Figure 2. Six different stages are used to perform the entire process.

A. Capturing images

A comparatively large BdSL image dataset has been assembled for this paper; it contains 1800 images in total. We collected data from several deaf school communities, taking images of uncovered hands against white backgrounds. A camera with a suitable, stable resolution was used to capture the images.

Fig. 2. Working flow of whole process.

B. Labeling data

This step is essential to minimize the noise introduced in the capturing process and to improve image quality. Some BdSL character signs vary only slightly from one another; images of such characters look similar and complicate the experiment, so we categorized the characters individually, with a separate class/folder for each character. Written Bangla has about 49 characters, but in sign
language there are fewer: Bangla Sign Language has only 36 characters in total. We therefore kept 36 characters, named with a numeric convention from 1 to 36. The naming convention for all characters is given in a chart below.

C. Cropping images

The captured images cannot be used for any character recognition purpose without cropping; cropping prepares the images for further experiments. The images are cropped while preserving the height-to-width ratio for future processing, so that only the hand region is shown.

Fig. 3. Cropping image.

D. Resizing images and converting to grayscale

To make the Ishara-Lipi dataset usable in machine learning, deep learning, or computer-vision-based work, the images were resized to a standard format with a Python cv2 script. We wrote a script that visits every folder of images and applies the same operations to each image: it first resizes the image to 128 x 128 pixels and then converts it from RGB to grayscale.

Fig. 4. Cropped images.

Fig. 5. Final gray scale images.

IV. DATASET PROPERTIES

• The final dataset contains 1800 images in total (36 * 50 = 1800).
• Of the 36 characters, 6 are Bangla vowels and 30 are consonants.
• Every image is resized to 128 x 128 pixels.
• The images were captured, resized, and finally converted from RGB to grayscale.
• The Ishara-Lipi dataset contains 36 folders of images labeled by numbers (1, 2, 3, ..., 36), following the sequence of the Bangla sign characters.
• All images are stored in .jpg format.

V. POSSIBLE USAGE OF ISHARA-LIPI DATASET

• It can be used broadly in data science for building artificial models.
• Ishara-Lipi may play a significant part in research and development work on BdSL.
• People may take Ishara-Lipi as a de facto standard format for BdSL datasets, as it is the first one.

VI.
NAMING CONVENTION AND DATA REPOSITORY

A naming convention is an important part of using the data properly, so the Ishara-Lipi dataset follows a simple and friendly standard naming convention. There are 36 folders, labeled with the numbers 1 to 36.

Fig. 6. Numeric representation of BdSL characters.

The folders are named with these numbers, and every folder contains 50 images of one character. The individual character images are .jpg files named 1 01.jpg, 1 02.jpg, 1 03.jpg, 1 04.jpg, and so on.

VII. MODEL CONSTRUCTION

The Ishara-Lipi dataset provides 128 x 128-pixel grayscale images. To build the model we performed some preprocessing, such as converting the grayscale images to binary via thresholding; the method we used determines the threshold automatically from the image using Otsu's method.

Algorithm 1:
1: ADAM (Learning Rate)
2: For 30 iterations in all batches do:
3: Convolution 1 (Filter, Kernel Size, Stride, Padding, Activation)
4: Convolution 2 (Filter, Kernel Size, Stride, Padding, Activation)
5: MaxPool 1 (Pool Size)
6: Dropout (Rate)
7: Convolution 3 (Filter, Kernel Size, Stride, Padding, Activation)
8: Convolution 4 (Filter, Kernel Size, Stride, Padding, Activation)
9: MaxPool 2 (Pool Size)
10: Dropout (Rate)
11: Dense (Units, Activation, Kernel Initializer, Bias Initializer)
12: Dropout (Rate)
13: Dense (Units, Activation, Kernel Initializer, Bias Initializer)
14: end for

The model proposed in this paper uses the ADAM optimizer with a learning rate of 0.001 and is a 9-layer CNN. Convolutions 1 and 2 have filter size 32, kernel size (5x5), stride (1x1), and same padding with
ReLU activation (Eq. (1)), followed by a 5 x 5 max-pooling layer and a 25% dropout to reduce overfitting.

ReLU(x) = max(0, x)    (1)

For convolutions 3 and 4, the filter size is 64, the kernel size is (3x3), the stride is (1x1), with same padding and ReLU activation, followed by a 2 x 2 max-pooling layer and a 25% dropout. The output is then flattened and fed into a dense layer with 256 units, ReLU activation, and 50% dropout. The final output layer uses 36 units with softmax activation (Eq. (2)). Fig. 3 shows the neural network architecture.

S(y_i) = e^{y_i} / Σ_j e^{y_j}    (2)

Fig. 7. Bangla sign language processed data.

A. Model Optimization and Learning Rate

The choice of optimization algorithm can make a substantial difference to the results of deep learning and computer vision work. The Adam paper says, "...many objective functions are composed of a sum of subfunctions evaluated at different subsamples of data; in this case, optimization can be made more efficient by taking gradient steps w.r.t. individual sub-functions...". The Adam optimization algorithm is an extension of stochastic gradient descent that has recently been adopted in most computer vision and natural language processing applications. The method computes individual adaptive learning rates for different parameters from estimates of the first and second moments of the gradients. The proposed method uses the ADAM optimizer with a learning rate of 0.001.

When a neural network performs classification and prediction tasks, a recent study shows that the cross-entropy loss performs better than classification error and mean squared error. With cross-entropy error, the weight updates do not keep shrinking, so training is not as likely to stall out.
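The activation and loss functions discussed above (ReLU, softmax, and the cross-entropy loss the authors favour) can be sketched in NumPy. This is an illustrative reconstruction, not the paper's code; the function names and the numerical-stability shift in the softmax are our own.

```python
import numpy as np

def relu(x):
    # Eq. (1): ReLU(x) = max(0, x)
    return np.maximum(0.0, x)

def softmax(y):
    # Eq. (2): S(y_i) = exp(y_i) / sum_j exp(y_j)
    # Shifting by max(y) avoids overflow; the result is unchanged.
    e = np.exp(y - np.max(y))
    return e / e.sum()

def categorical_cross_entropy(t, p):
    # L = -sum_j t_j * log(p_j); clip to avoid log(0)
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(t * np.log(p))

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)             # probabilities summing to 1
target = np.array([1.0, 0.0, 0.0])  # one-hot label
loss = categorical_cross_entropy(target, probs)
```

In a framework such as Keras these correspond to the built-in `relu` and `softmax` activations and the `categorical_crossentropy` loss.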
The proposed method uses categorical cross-entropy (Eq. (3)) as the loss function:

L_i = -Σ_j t_{i,j} log(p_{i,j})    (3)

To make the optimizer converge faster and closer to the global minimum of the loss function, an automatic learning-rate reduction method is used. The learning rate is the step size taken while walking toward the minimum loss. With a higher learning rate, the optimizer converges quickly but can get stuck in a local minimum instead of the global minimum. To keep the advantage of the fast computation of a high learning rate, after each epoch the model dynamically decreases the learning rate by monitoring the validation accuracy.

VIII. MODEL EVALUATION

For the Ishara-Lipi sign character database, after data augmentation, 15% of the data was kept for testing and 85% for training. After 50 epochs the model achieves 92.65% accuracy on the training set and 94.74% accuracy on the validation set. Fig. 4 shows the loss value and accuracy for the training and validation sets.

Fig. 8. Bangla sign language processed data.

IX. CONCLUSION AND FUTURE WORK

In this paper we have presented a complete dataset named Ishara-Lipi containing all BdSL characters. The dataset is publicly available, and by providing all characters it can be used for training and testing in data science. We presume that this dataset will be a great resource for researchers in Bangla Sign Language recognition. At the same time, it can be useful for computer vision and machine learning methods designed for learning signs and for approaches to analyzing gestures. We believe that this dataset will be an effective resource for all users, learners, and researchers of Bangla Sign Language, and we expect that the availability of the BdSL dataset will inspire and aid other researchers in studying the difficulty of sign language recognition and gesture
recognition. In data science, data is the most important factor in building an effective model, so increasing the dataset size will be future work for this project. As it is an open-source project, anyone can contribute voluntarily.

REFERENCES
[1] Athitsos, Vassilis, et al. "The American Sign Language lexicon video dataset." Computer Vision and Pattern Recognition Workshops, 2008. CVPRW'08. IEEE Computer Society Conference on. IEEE, 2008.
[2] Li, Yun, et al. "A sign-component-based framework for Chinese sign language recognition using accelerometer and sEMG data." IEEE Transactions on Biomedical Engineering 59.10 (2012): 2695-2704.
[3] Al-Jarrah, Omar, and Alaa Halawani. "Recognition of gestures in Arabic sign language using neuro-fuzzy systems." Artificial Intelligence 133.1-2 (2001): 117-138.
[4] Forster, Jens, et al. "Extensions of the Sign Language Recognition and Translation Corpus RWTH-PHOENIX-Weather." LREC. 2014.
[5] Nanivadekar, Purva A., and Vaishali Kulkarni. "Indian Sign Language Recognition: Database Creation, Hand Tracking and Segmentation." Circuits, Systems, Communication and Information Technology Applications (CSCITA), 2014 International Conference on. IEEE, 2014.
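As a quick sanity check on the architecture in Section VII above, the feature-map arithmetic of the 9-layer CNN can be traced with a few lines of Python. The filter counts, kernel sizes and pool sizes come from the paper; the assumptions that 'same' padding preserves spatial size and that each pooling layer uses a stride equal to its pool size (with floor division) are ours.

```python
# Trace feature-map shapes through the 9-layer CNN of Section VII.
# Hyperparameters follow the paper; padding/stride conventions are assumed.

def conv_same(h, w, c_in, filters):
    # 'same' padding keeps spatial size; channels become `filters`
    return h, w, filters

def maxpool(h, w, c, pool):
    # assume pool stride == pool size, floor division
    return h // pool, w // pool, c

h, w, c = 128, 128, 1             # grayscale input image
h, w, c = conv_same(h, w, c, 32)  # Conv1: 32 filters, 5x5
h, w, c = conv_same(h, w, c, 32)  # Conv2: 32 filters, 5x5
h, w, c = maxpool(h, w, c, 5)     # MaxPool1: 5x5 -> 25x25x32
h, w, c = conv_same(h, w, c, 64)  # Conv3: 64 filters, 3x3
h, w, c = conv_same(h, w, c, 64)  # Conv4: 64 filters, 3x3
h, w, c = maxpool(h, w, c, 2)     # MaxPool2: 2x2 -> 12x12x64
flat = h * w * c                  # flattened input to the Dense layers
print(flat)                       # 9216 values feed the 256-unit Dense layer
```

Under these assumptions, the 36-way softmax layer sits on top of a 256-unit dense layer fed by 9216 flattened features.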
Front. Comput. Sci., 2020, 14(3): 143302
https://doi.org/10.1007/s11704-018-7253-3

Bangla language modeling algorithm for automatic recognition of hand-sign-spelled Bangla sign language

Muhammad Aminur RAHAMAN, Mahmood JASIM, Md. Haider ALI, Md. HASANUZZAMAN
Department of Computer Science and Engineering, University of Dhaka, Dhaka-1000, Bangladesh
© Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature 2019

Abstract Because of the use of traditional hand-sign segmentation and classification algorithms, the many diversities of the Bangla language (including joint letters and dependent vowels), and the representation of 51 written Bangla characters by only 36 hand-signs, continuous hand-sign-spelled Bangla sign language (BdSL) recognition is challenging. This paper presents a Bangla language modeling algorithm for automatic recognition of hand-sign-spelled Bangla sign language, which consists of two phases. The first phase is designed for hand-sign classification, and the second phase is designed for the Bangla language modeling algorithm (BLMA) for automatic recognition of hand-sign-spelled Bangla sign language. In the first phase, we propose a two-step classifier for hand-sign classification using a normalized outer boundary vector (NOBV) and a window-grid vector (WGV), calculating the maximum inter-correlation coefficient (ICC) between the test feature vector and pre-trained feature vectors. The system first classifies hand-signs using the NOBV; if the classification score does not satisfy a specific threshold, a second classifier based on the WGV is used. The system is trained using 5,200 images and tested using another (5,200 x 6) images of 52 hand-signs from 10 signers in 6 different challenging environments, achieving a mean classification accuracy of 95.83% at a computational cost of 39.972 milliseconds per frame.
In the second phase, we propose the Bangla language modeling algorithm (BLMA), which discovers all "hidden characters" based on the "recognized characters" from the 52 hand-signs of BdSL, making it possible to form any Bangla word, composite numeral or sentence in BdSL with no training, based only on the result of the first phase. To the best of our knowledge, the proposed system is the first system in BdSL designed for automatic recognition of hand-sign-spelled BdSL over a large lexicon. The system is tested for BLMA using 500 hand-sign-spelled words, 100 composite numerals and 80 sentences in BdSL, achieving mean accuracies of 93.50%, 95.50% and 90.50% respectively.

Received July 18, 2017; accepted August 24, 2018. E-mail: aminur.wg@gmail.com

Keywords Bangla sign language (BdSL), hand-sign, classification, Bangla language modeling rules (BLMR), Bangla language modeling algorithm (BLMA)

1 Introduction

Like spoken language, sign language (SL) is a separate language with its own grammar and rules, used by speech- and/or hearing-impaired people to communicate with non-signing people and among themselves. SL is part of the cultural, social, historical and religious heritage. Approximately 7% of the world's population uses SL as their first language [1, 2]. Almost 2.6 million deaf people live in Bangladesh [3], and they use Bangla sign language (BdSL) to communicate.

The recognition of continuous, natural signing in BdSL is challenging, in terms of both video analysis and linguistics. Nowadays a significant goal is to achieve real-time SL recognition in naturalistic scenarios where occlusions, illumination changes and cluttered backgrounds are handled. Computer-vision-based image processing methods present a backward-compatible, user-friendly and robust solution to the SL recognition problem in real time [4].
So, the demand for computer-vision-based, real-time continuous SL recognition research is increasing rapidly. The Bangla sign language dictionary [5] uses a 36-character (6 vowels and 30 consonants
, as shown in Figs. 1(a) and 1(b)) two-handed Bangla sign alphabet, drawn from the 51 written Bangla characters based on pronunciation, and 10 basic numerals (0 to 9), as shown in Fig. 1(c). But most word- and sentence-level signs are gestures: in BdSL, about 5,000 gestures [6] are used to express sign words and sentences, which are mostly impossible for a human to memorize.

Fig. 1 Example postures of 52 hand-signs in BdSL. (a) Example postures of BdSL vowel signs; (b) example postures of BdSL consonant signs; (c) example postures of BdSL numeral signs; (d) example postures of special signs

To establish communication between signing and non-signing people, there is a need to develop BLMA, an abbreviation for Bangla language modeling algorithm, for automatic recognition of hand-sign-spelled BdSL in real time by discovering "hidden characters" that are not in BdSL (described in detail in Section 3), using only the Bangla alphabet (36 letters) and basic numerals (0 to 9), so that any word, composite numeral or sentence can be formed by hand-sign-spelling. Memorizing only 36 alphabet signs and 10 numeral signs is possible for anyone.

The proposed system is an extension of our previous systems [7, 8]. Our previous system [7] was developed only for hand-sign segmentation and classification using a fuzzy-rule-based RGB (FRB-RGB) model and window-grid vector (WGV) analysis. The proposed system, in contrast, is designed for automatic recognition of hand-sign-spelled BdSL in real time and contains two phases, as shown in Fig. 2. In the first phase, the system is designed for hand-sign classification, i.e., classification of individual sign letters, trained with the 52 hand-signs (6 vowels + 30 consonants + 10 numerals + 6 special signs) in BdSL shown in Fig. 1, using our previously used NOBV, or vector contours (VC) [8], and WGV [7] in combination. Li et al. [9] developed a static gesture recognition system based on high-level features, tested accurately on the hand digit gestures 0-9. Dong et al.
[10] proposed a descriptor named holons visual representation (HVR), a derivative mutational self-contained combination of global and local information. Our previous system [8], using NOBV (i.e., vector contours), could not distinguish hand-signs whose outer contours are similar but whose inner shapes differ. So, in this paper, we combine it with the rotation-, translation- and scale-invariant feature vector WGV [7], which captures not only the outer contour but also the inner shape, to overcome the limitation of our previous system [8]. In the proposed system, we have significantly improved the classification module with the proposed two-step classifier based on NOBV [8] and WGV [7]. Dong et al. [11] proposed a discriminative light unsupervised learning network (DLUN) to address the image classification challenge. Garcia-Ceja and Brena [12] proposed an improved three-stage classifier for activity recognition, and Lee et al. [13] proposed a person-specific saliency system for the recognition of dynamic gestures using two-stage classifiers based on different features. Our proposed two-step classifier, in contrast, is simple and time-efficient. By combining the two features NOBV and WGV, the proposed system achieves higher recognition accuracy than previous systems in cluttered and dynamic backgrounds with illumination variation. We use the 6 special signs shown in Fig. 1(d) to implement the second phase. In the second phase, the system is designed as BLMA for
automatic recognition of hand-sign-spelled words, composite numerals and sentences over a large lexicon in BdSL with no training, based only on the result of the first phase (hand-sign classification).

Fig. 2 Architecture of the proposed system: (a) block diagram and (b) detailed view

BLMA is mainly a part of natural language understanding (NLU); Mills et al. [14, 15] and Santoni and Pourabbas [16] worked on NLU. Because only 36 hand-signs are used to represent the 51 written Bangla characters, and because of the other diversities of the Bangla language, including joint letters and dependent vowels, the implementation of BLMA is challenging. In this paper, the implementation of the second phase, the proposed BLMA, is the unique contribution to hand-sign-spelled Bangla sign language recognition (BdSLR).

Moving hand-sign detection and tracking in cluttered and dynamic backgrounds is a challenging and essential part of vision-based hand-sign classification. Recently, various approaches have been proposed to detect moving objects [17, 18]. Dong et al. [19] used E-GrabCut for video object segmentation; another of their studies [20] addressed natural image segmentation. Li et al. [21] proposed and developed an approach to segment streaming video in environments affected by data noise and/or corruption. A real-time moving hand-sign detection system was developed by Chen et al. [22], using motion, skin color and edge detection. Alon et al. [23] proposed hand-sign detection in front of moving, cluttered backgrounds by combining only skin-color and motion cues. Another system, developed by Asaari et al. [24], presents an efficient method for hand detection and tracking that integrates an adaptive Kalman filter (AKF) and an Eigen-hand method, with skin color and motion cues as the main tracking features. The system developed by Khaled et al.
[25] isolates the moving hand from the whole image by subtracting the continuously updated static background. Instead of processing only an ROI, these systems process the whole image to isolate the hand-sign, which increases the computational cost (CC). Skin-color-based hand segmentation and detection is easy and invariant to different hand postures and to scale, translation and rotation changes [26], but it does not perform well under varying illumination conditions or cluttered and dynamic backgrounds [27]. For this reason, we propose a solution to detect and track the hand-signs: after initialization of the ROI [27], an adaptive Kalman filter (AKF) [24, 28-30] is applied to track the ROIs, under the assumption that all hand-signs are performed within ROIs with cluttered and dynamic backgrounds. The system then extracts the hand-sign as a binary image by segmenting skin color with a robust fuzzy-rule-based RGB (FRB-RGB) model from the ROI with specific motion, as described in detail in our previous system [7].

Notable research on Bangla sign language recognition (BdSLR) [27, 31-34] has been done in the last few years, but most BdSLR systems were developed only for sign alphabet and/or number recognition. As in several other computer vision tasks, deep learning has also recently made inroads into SL recognition, achieving outstanding results [35, 36]. Asadi-Aghbolaghi et al. [37] collected and reviewed deep learning methods for gesture recognition, including their highlighted features, advantages and challenges. Liu et al. [38] and Varol et al. [39] used 3D filters in
the convolutional layers of their deep learning models. Zhu et al. [40] used pyramidal 3D convolutional networks for large-scale isolated gesture recognition using RGB depth data. Wang et al. [41] used depth data in 2D networks for gesture recognition, including dynamic depth images, dynamic depth normal images and dynamic depth motion normal images. Xu [42] developed a real-time hand gesture recognition and human-computer interaction system using a convolutional neural network (CNN) classifier, the state of the art in this research area.

Our previous system [43] was developed for Bangla sign word recognition using 18 sign words, achieving a recognition accuracy of 90.11%. But that system was applicable only to word-gesture recognition, not continuous hand-sign-spelled word recognition; for large-lexicon BdSL recognition, such a system would need to be trained for each word gesture. Another system was developed by Park et al. [44] for Korean finger-spelling recognition with similar ideas. Kane and Khanna [45] developed a system for finger-spelling recognition using depth sensors, tested against one-handed American sign language (ASL), NTU hand digits and two-handed Indian sign language (ISL) with 94.1% accuracy. But these finger-spelling recognition systems are mainly character-sign recognition systems; the syntax for forming words and/or sentences by hand-sign-spelling was not used in them. Fang et al. [46] developed a continuous Chinese sign language (ChSL) recognition system with a large vocabulary, tested using 5113 Chinese signs/sentences and obtaining an average accuracy of 91.9%. Liwicki and Everingham [47] developed automatic recognition of finger-spelled words in British sign language (BSL), tested using 1,000 low-quality webcam videos of 100 words and achieving 98.9% accuracy; but the system is signer-dependent and does not perform well against cluttered backgrounds. More recently, Koller et al.
[48] developed ASL recognition using a statistical approach handling multiple signers over a large vocabulary. The system performs well, signer-independently, with large-vocabulary databases, but it was designed and tested for ASL gesture recognition against a plain background, not for finger-spelled words. Some systems were developed for large-lexicon finger-spelled or hand-sign-spelled word recognition in various SLs, such as [47], but these cannot be used for the Bangla language, as the structure of Bangla words and/or sentences differs greatly from other languages. To the best of our knowledge, the proposed system is the first system in BdSL designed for automatic recognition of hand-sign-spelled BdSL over a large lexicon using BLMA, able to form any Bangla word, composite numeral or sentence from only 52 hand-signs.

The main contributions of the proposed system can be summarized as: (1) hand-sign classification (a two-step classification technique based on WGV and NOBV is proposed instead of a traditional classifier, to achieve high accuracy and reduced computational cost); and (2) the Bangla language modeling algorithm (BLMA), proposed to interpret hand-sign-spelled BdSL into Bangla words, composite numerals and sentences over a large lexicon.

The rest of the paper is organized as follows. Section 2 describes hand-sign classification. Section 3 describes the proposed Bangla language modeling algorithm (BLMA) for automatic recognition of hand-sign-spelled BdSL by discovering "hidden characters" that are not in BdSL. Section 4 presents the experimental results with discussion. Finally, the paper is concluded in Section 5.

2 First phase: hand-signs classification

In this
section, we describe the first phase of the proposed system, in which the system classifies the hand-signs of individual characters in BdSL. Figure 2 presents the architecture of the proposed system. Although the first phase was already implemented in our previous systems [7, 8, 27], we have improved the classifier in this phase. The system captures an image sequence, denoted by I_rgb^m(x, y), using a USB or CCD camera, where m represents the sequence number. After ROI generation by detecting hand-signs using a Haar classifier, the system tracks the ROI using an AKF [24, 28-30]. Face-area subtraction, hand-sign detection and ROI generation, skin-color segmentation using the FRB-RGB model, probable binary hand-sign extraction, and noise removal are described in detail in our previous system [7]. In this paper, we propose and implement a two-step classifier based on NOBV and WGV. NOBV was used in our previous system [8] in terms of a vector contour (VC) using the complex-number representation "a + ib". The WGV generation process is described in detail in our previous system [7]. After extracting the feature vectors NOBV and WGV, the system is trained for each sign class i (i = 1, 2, 3, ..., n, where n = 52 is the number of sign classes) using the NOBV (Γ_i^j) and WGV (Ω_i^j), with j = 1, 2, 3, ..., 100, from 10 different signers, as in Eq. (1) and Eq. (2). The resulting number of training images for NOBV and WGV for each sign class i is (10 x 10) + (10 x 10) = 200.

Γ_i = [Γ_1^1, Γ_1^2, ..., Γ_1^100] [Γ_2^1, Γ_2^2, ..., Γ_2^100] [Γ_3^1, Γ_3^2, ..., Γ_3^100] ... [Γ_n^1, Γ_n^2, ..., Γ_n^100],    (1)

Ω_i = [Ω_1^1, Ω_1^2, ..., Ω_1^100] [Ω_2^1, Ω_2^2, ..., Ω_2^100] [Ω_3^1, Ω_3^2, ..., Ω_3^100] ... [Ω_n^1, Ω_n^2, ..., Ω_n^100].    (2)

In the training phase, the system combines the two feature vectors NOBV (Γ_i^j) and WGV (Ω_i^j) for each sign class i, as represented by Eq. (3):

ξ_i^j = [Γ_i^j, Ω_i^j].
(3)

Here, the NOBV (Γ_i^j) and WGV (Ω_i^j) are stored in two separate sub-classes (sub-class i1 and sub-class i2 respectively) within each sign class i. The structure of the combined feature vector ξ_i^j = [Γ_i^j, Ω_i^j] is shown in Fig. 3.

Fig. 3 Example structures of the combined feature vector

The combined feature vectors ξ_i^j of the 52 input hand-signs (the 46 letter and numeral sign glyphs together with the special signs S1, S2, S3, S4, S5, S6) are assigned to the sign class labels i = 1, 2, 3, ..., 52 respectively in the training database, where each class label contains j = 100 NOBVs and j = 100 WGVs. The size of each
combined feature vector ξ_i^j = [Γ_i^j, Ω_i^j] for a single sign is [K_Γ + M_Ω] = [50 + 25] = 75. Hence the size of ξ_i^j = [Γ_i^j, Ω_i^j] for a single sign class i (with j = 100 and i = 1) is [50 + 25] x 100 x 1 = 7,500, and the resulting size of the combined feature vector ξ_i^j for all 52 hand-signs is [K_Γ + M_Ω] x j x i = [50 + 25] x 100 x 52 = 390,000. In the previous system [27], the size of the combined feature vector was (M x N) x j x i = (150 x 150) x 100 x 52 = 117,000,000, more than 300 times larger than in the proposed system. As a result, the proposed system is able to minimize the CC.

After training, the proposed two-step classification technique is applied to recognize BdSL by comparing with the pre-trained hand-sign feature vectors. In the first step, the classifier selects the K_Ω most frequent sign classes (where 7 ≥ K_Ω ≥ 3 gives better performance, decided by observing the graph presented in Section 4), based on the K_Ω greatest similarities between the pre-trained NOBVs (Γ_i^j) and the test NOBV (Γ), using Eq. (4):

ICC_Γ_max(K_Ω) =
    Class_i1,   if ICC_Γ(η) / (|Γ_1^j| |Γ|) ≥ TH_ICC,
    Class_i2,   if ICC_Γ(η) / (|Γ_2^j| |Γ|) ≥ TH_ICC,
    Class_i3,   if ICC_Γ(η) / (|Γ_3^j| |Γ|) ≥ TH_ICC,
    ...
    Class_iK_Ω, if ICC_Γ(η) / (|Γ_n^j| |Γ|) ≥ TH_ICC,    (4)

where ICC_Γ_max(K_Ω) represents the K_Ω most frequent sign classes based on the K_Ω greatest values of ICC_Γ_max between the pre-trained Γ_i and the test Γ. In this system we use K_Ω = 3 for the best performance, decided from the observation graph presented in Section 4, so the system selects 3 sign classes from the pre-trained database of Γ_i^j using Eq. (4). The term ICC_Γ(η) / (|Γ_i^j| |Γ|) measures the maximum inter-correlation coefficient (ICC) between the pre-trained Γ_i^j and the test Γ, with a value between 0 and 1, and returns the sign class i; here |Γ_i^j| and |Γ| represent the normalized lengths of the pre-trained Γ_i^j and the test Γ respectively. ICC_Γ(η) is the ICC between the pre-trained Γ^j and the test Γ, calculated using Eq. (5) [8].
Here, Γ(η) denotes the outer boundary vector obtained from the test Γ by cyclically shifting its vector points γ_η by η elements:

ICC_Γ(η) = (Γ_i^j, Γ(η)).        (5)

If ICC_Γmax satisfies the similarity threshold (TH_ICC = 0.85), i.e., ICC_Γmax ≥ 0.85, the system returns the sign classes i_KΩ. The threshold value TH_ICC = 0.85 is selected by observing the graph shown in Fig. 4, using selected offline samples of hand-signs. As Fig. 4 shows, the similarity rate increases with the threshold TH_ICC, but beyond TH_ICC > 0.85 the number of candidates (NOC) selected decreases and the number of recognition drops (NORD) increases. After selecting the K_Ω most likely sign classes, the system checks whether the selected sign classes are all the same. If they are (Class_i1 = Class_i2 = ... = Class_iK_Ω = i), the system returns the sign class i and recognizes the hand-sign labeled with that sign class. Otherwise, a confusion arises and the
system sends the K_Ω selected (distinct) sign classes (i1, i2, ..., iK_Ω) to the second-step classifier, which selects one of them based on the maximum ICC between the pre-trained Ω_i^j and the test Ω using Eq. (6):

ICC_Ωmax = ICC_Ω(η) / (|Ω_i^j| |Ω|),        (6)

where ICC_Ωmax measures the maximum ICC between the pre-trained WGV (Ω_i^j) and the test WGV (Ω), with values between 0 and 1. |Ω_i^j| and |Ω| denote the mean values of the pre-trained Ω_i^j and the test Ω, respectively. ICC_Ω(η) is the ICC between the pre-trained Ω_i^j and the test Ω, calculated using Eq. (7), where Ω(η) denotes the η-th Window-Grid (ω_η) of the WGM for each hand-sign, obtained from the test Ω by cyclically shifting its Window-Grids (ω_η) by η elements:

ICC_Ω(η) = (Ω_i^j, Ω(η)).        (7)

(Fig. 4: Performance rates for different threshold (TH_ICC) values.)

If ICC_Ωmax satisfies the similarity threshold (TH_ICC = 0.85), i.e., ICC_Ωmax ≥ 0.85, the system returns the sign class i from the candidate classes i1, i2, ..., iK_Ω. Most hand-sign classifications are completed by the first-step classifier using only the NOBV; the second-step classifier, based on the WGV, is used only for those hand-signs whose NOBVs are similar.

Muhammad Aminur RAHAMAN et al. Bangla language modeling algorithm (BLMA)

3 Second phase: Bangla language modeling algorithm (BLMA)

In this section, we propose the Bangla language modeling algorithm (BLMA) for automatic recognition of hand-sign-spelled BdSL by discovering "hidden characters" from "recognized characters". Bangla "hidden characters" are those characters that are not present in BdSL, and "recognized characters" are those already recognized from BdSL by the first phase (hand-sign classification) of our proposed system.
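Before turning to the language model, the first-phase two-step classification of Eqs. (4)-(7) can be sketched as follows. This is a simplified illustration, not the authors' implementation: the function name `classify`, the dictionary layout, the threshold fallback, and the use of a single similarity for both steps are our assumptions; the real system ranks K_Ω candidate classes over per-sample votes.

```python
import numpy as np

def classify(test_nobv, test_wgv, trained_nobv, trained_wgv, th=0.85, k=3):
    """Two-step sketch: `trained_nobv` / `trained_wgv` map a class label to
    lists of pre-trained feature vectors (invented layout)."""
    def icc(a, b):
        # cyclic-shift inter-correlation, cf. Eqs. (4)-(7): best normalized
        # dot product over all rotations eta of the second vector
        a, b = np.asarray(a, float), np.asarray(b, float)
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return max(np.dot(a, np.roll(b, eta)) for eta in range(len(b))) / denom

    # Step 1: classes whose best NOBV similarity clears the threshold
    scores = {c: max(icc(test_nobv, v) for v in vs)
              for c, vs in trained_nobv.items()}
    top = sorted((c for c, s in scores.items() if s >= th),
                 key=lambda c: -scores[c])[:k]
    if not top:                       # fallback: best classes regardless
        top = sorted(scores, key=lambda c: -scores[c])[:k]
    if len(top) == 1:                 # unambiguous: step 2 not needed
        return top[0]
    # Step 2: disambiguate the remaining candidates with the WGV
    return max(top, key=lambda c: max(icc(test_wgv, v)
                                      for v in trained_wgv[c]))
```

The design point this illustrates is the paper's cost argument: the cheap NOBV comparison settles most queries, and the WGV comparison runs only when several classes clear the threshold.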
As presented in Algorithm 1, for each character, word, composite-numeral, and sentence state, the input is κ_i, recognized from each hand-sign X_i, and X_m is the position of κ_i in the text box. /. . . / denotes the pronunciation of the Bangla alphabet in the Roman alphabet, used to form compound words [49], and (. . . ) gives the English meaning of the Bangla compound words.

Modeling the Bangla language differs from modeling other written languages. Some vowels used with consonants to form Bangla words change form, unlike in English. For example, the syntax of the Bangla word /aAmAr/ is "/aA/ + /m/ + /A/ + /r/", which uses four letters, in which the fourth vowel /aA/ is replaced with its dependent form "|"/A/; the English version of that word is simply "My". As another example, the syntax of the Bangla word /Jbr/ is "/J/ + /LK/ + /b/ + /r/", in which the first three letters "/J/ + /LK/ + /b/" are replaced by a joint character; in the English version of that word, "Fever", there is no need to join two letters. Since a single hand-sign can represent multiple characters, selecting one character among the several corresponding to a hand-sign is another complex issue. Moreover, Bangla sentences have no fixed traditional syntax such as the "Subject + Verb + Object" order of English. Some of
the Bangla sentences are formed without a verb, such as /aAmAr nAm rhmAn/ or /aANggUr Tk/. In most cases, the Bangla verb is placed at the end of the sentence. The use of different punctuation marks is another complex issue. For these reasons, the implementation of BLMA is challenging but essential.

In this system, 52 characters are recognized from BdSL (Fig. 1), categorized into the seven categories listed in Table 1.

Algorithm 1  Bangla language modeling algorithm (BLMA)
Input: κ_i ∈ (T1 = Table 1), X_m = position of κ_i, T2 = Table 2, T3 = Table 3, T4 = Table 4, T5 = Table 5
Output: recognized hand-sign-spelled BdSL
1:  Initialization: TempWord = φ; TempChar = φ; SentenceFlag = 0; QuestionFlag = 0; Rule = φ
2:  if (κ_i ∈ {ct1, ct2, ct3, ct5} AND (X_{m−1}(κ_i) == κ_space OR X_{m−1}(κ_i) == φ)) then
3:      Print κ_i
4:      TempChar = κ_i
5:  else
6:      if (κ_i ∈ T2.input AND X_{m−2}(κ_i) ∈ ct3) then
7:          Print T2.output            \\ Table 2: use of vowels after consonants
8:          TempChar = T2.output
9:      else if (κ_i ∈ T3.input AND Rule == T3.rule) then
10:         Print T3.output            \\ finding a 'hidden character' from a 'recognized character' using Table 3
11:         TempChar = T3.output
12:     else
13:         Print κ_i                  \\ if no rule matches, print κ_i as-is, including Bangla numerals
14:         TempChar = κ_i
15:     end if
16: end if
17: if (TempChar ≠ κ_space AND TempChar ≠ κ_punctuation) then
18:     TempWord = TempWord ∪ TempChar \\ Bangla word forming
19: else
20:     if (TempWord ∈ T4.input) then
21:         TempWord = T4.output
22:     end if
23:     Print TempWord
24:     if (SentenceFlag == 0 AND TempWord ∈ T5) then
25:         QuestionFlag = 1           \\ is the first word of the sentence a QWord?
26:     else
27:         QuestionFlag = 0
28:     end if
29:     if (TempChar == κ_punctuation) then
30:         if (TempWord ∈ T5 OR QuestionFlag == 1) then
31:             TempChar = κ_punctuation = '?'   \\ κ_punctuation is changed to '?'
32:         end if
33:         Print TempChar
34:         SentenceFlag = 0           \\ starting a new sentence
35:     else
36:         SentenceFlag = 1           \\ within a sentence
37:     end if
38: end if
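The rule-driven replacement at the heart of Algorithm 1 (lines 9-11, driven by Table 3) can be illustrated with a toy sketch. The function name, the romanized rule table, and the three rules shown are invented for illustration only; the real Table 3 has fifteen rules with richer positional conditions.

```python
def apply_blmr(chars, link="LK"):
    """Toy sketch of the BLMR idea: scan recognized characters and replace
    an 'x LK y' pattern with a 'hidden character' (rules invented here;
    they mirror Table 3's rules 1-3 in spirit only)."""
    rules = {("e", "i"): "ei", ("o", "u"): "ou", ("r", "i"): "ri"}
    out = []
    for ch in chars:
        # pattern: X_{m-2} is a recognized character, X_{m-1} is the link sign
        if len(out) >= 2 and out[-1] == link and (out[-2], ch) in rules:
            hidden = rules[(out[-2], ch)]
            out[-2:] = [hidden]       # delete the pair, emit the hidden char
        else:
            out.append(ch)
    return out
```

For example, the recognized stream `["k", "e", "LK", "i", "t"]` collapses to `["k", "ei", "t"]`, mirroring how rule 1 of Table 3 deletes κ_i, X_{m−1}(κ_i), and X_{m−2}(κ_i) and emits /ei/.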
However, only 36 hand-signs are used for the 51 written characters, based on pronunciation [5]. To make complete Bangla words, composite numerals, and sentences, we need to discover another (51 − 36 = 15) fifteen "hidden characters": 6 hidden vowels {/ii/, /uu/, /ei/, /oi/, /ri/, /li/} and 9 hidden consonants {/Ng/, /iy/, /n/, /j/, /Shh/, /SHh/, /Tt/, /CB/}. We ignore the "hidden characters" /li/ and /CB/, which are no longer used in the Bangla language [50]. The Bangla language also uses ten dependent vowels known as "kar" (listed as the output of Table 2), (jfola) /j/, (rfola) /RR/, (ref) /rr/, and about 285 joint characters [51, 52] to form words; these are likewise treated as "hidden characters". To discover all of the "hidden characters" needed to form Bangla words and sentences using Algorithm 1, we have modeled the Bangla language as shown in Fig. 5, which consists of 17 sub-models. We have also listed the alphabet, the vowels, all syntax and rules, and examples of correct words and sentences of the Bangla language in Tables 1–5, respectively, to implement Algorithm 1. The syntax of a Bangla word is defined by κ_space . . . κ_space or κ_space . . . κ_punctuation, and
the syntax of a Bangla sentence is defined by κ_punctuation . . . κ_punctuation or κ_space . . . κ_punctuation. Punctuation (κ_punctuation) may be '|'/./, ','/,/ (comma), ';'/;/ (semicolon), '!'/!/ (exclamation mark), '?'/?/ (question mark), etc. [52]. In this paper, we have considered only two punctuation marks, '|'/./ (full stop) and '?'/?/ (Question

(Fig. 5: Bangla language modeling for discovering "hidden characters" from "recognized characters".)

Table 1  List of Bangla 'recognized characters' from corresponding BdSL
ct1:   [/a/] of Fig. 1(a)
ct2^1: [/aA/, /e/, /u/, /e/, /o/] of Fig. 1(a)
ct3^2: [/k/, /Khh/, /g/, /Ghh/, /c/, /Chh/, /J/, /Jhh/, /T/, /THh/, /D/, /DHh/, /t/, /Thh/, /d/, /Dhh/, /N/, /p/, /Phh/, /b/, /Bhh/, /m/, /y/, /r/, /l/, /s/, /h/, /Rhh/] of Fig. 1(b)
ct4:   [/ng/, /Nh/] of Fig. 1(b)
ct5:   [/1/, /2/, /3/, /4/, /5/, /6/, /7/, /8/, /9/] of Fig. 1(c)
ct6:   [◦/z/, ◦◦◦/3z/, ◦◦◦◦◦/5z/, ◦◦◦◦◦◦◦/7z/] (◦ of Fig. 1(c) and S1–S3 of Fig. 1(d))
ct7:   [κ_space = ' '//, κ_link = /LK/, κ_punctuation = |/./^3] (S4–S6 of Fig. 1(d))
^1 Here we categorize only those vowels which originate from BdSL. The complete list of dependent and independent vowels (except /a/, which has no dependent vowel form) is presented in Table 2.
^2 Here we categorize only those consonants which originate from BdSL. The other consonants, generated by Algorithm 1, are categorized as ct3κ = [/Ng/, /iy/, /n/, /j/, /Shh/, /SHh/, /RHh/, /Tt/, (jfola)/j/, (rfola)/RR/, (ref)/rr/] according to Fig.
5.
^3 κ_punctuation may be |/./ or ?/?/ depending on the syntax of the Bangla sentence.

Table 2  List of Bangla vowels with corresponding dependent vowels as output, depending on the BLMR for vowels
1.  /a/ → /A/ (right of ct3)
2.  /i/ → /I/ (left of ct3)
3.  /u/ → /U/ (base of ct3)
4.  /e/ → /E/ (left of ct3)
5.  /o/ → /O/ (around the ct3)
6.  Table 3.output.rule1 (/ei/) → /EI/ (left of ct3)
7.  Table 3.output.rule2 (/ou/) → /OU/ (around the ct3)
8.  Table 3.output.rule3 (/ri/) → /RI/ (base of ct3)
9.  /ii/ (Table 4.output for /i/) → /II/ (right of ct3)
10. /uu/ (Table 4.output for /u/) → /UU/ (base of ct3)
Table 3.output.rule_i indicates the output of Table 3 for the corresponding i-th rule; Table 4.output indicates the output of Table 4 for the corresponding correct word.

Table 3  List of Bangla language modeling rules (BLMR) to implement BLMA
1.  if (κ_i == /i/ AND X_{m−2}(κ_i) == /e/ AND X_{m−1}(κ_i) == κ_link) → /ei/^1
2.  if (κ_i == /u/ AND X_{m−2}(κ_i) == /o/ AND X_{m−1}(κ_i) == κ_link) → /ou/^2
3.  if (κ_i == /i/ AND X_{m−2}(κ_i) == /r/ AND X_{m−1}(κ_i) == κ_link) → /ri/^3
4.  if (κ_i ∈ {/i/, /u/, /o/} AND X_{m−2}(κ_i) ∈ ct3 AND X_{m−1}(κ_i) == κ_link) → κ_i^4
5.  if (κ_i ∈ ct3 AND X_{m−2}(κ_i) == /r/ AND X_{m−1}(κ_i) == κ_link) → (ref)/rr/^5
6.  if (κ_i == /r/ AND X_{m−2}(κ_i) ∈ ct3 AND X_{m−1}(κ_i) == κ_link) → (rfola)/RR/^6
7.  if (κ_i == /y/ AND X_{m−2}(κ_i) == /i/ AND X_{m−1}(κ_i) == κ_link) → /iy/^7
8.  if (κ_i == /s/ AND Rule == R_SHh^8) → /SHh/
9.  if (κ_i == /s/ AND TempWord ∈ {/pUr/, /BhhA/, /tIr/} AND X_{m+1}(κ_i) == κ_link AND X_{m+2}(κ_i) ∈ {/k/, /Khh/, /p/, /Phh/}) → /s/
10. if (κ_i == /s/ AND X_{m−1}(κ_i) ∈ ct5) → /Shh/
11. if (κ_i == /N/ AND Rule == R_N^9) → /N/
12. if (κ_i == /N/ AND Rule ≠ R_N) → /n/
13. if (κ_i == /J/ AND X_{m−2}(κ_i) ∈ {ct3, JC, /n/, /j/} AND X_{m−1}(κ_i) == κ_link) → (jfola)/j/^10
14. if (κ_i == /ng/ AND X_{m+1}(κ_i) == κ_link AND X_{m+2}(κ_i) ∈ {/k/, /Khh/, /g/, /Ghh/, /m/}) → /Ng/
15. if (κ_i ∈ ct3 AND X_{m−1}(κ_i) == κ_link AND X_{m−2}(κ_i) ∈ {ct3, ct3x^11, JC}) → JC^12
^1 Delete (κ_i, X_{m−1}(κ_i), X_{m−2}(κ_i)) AND κ_i = /ei/
^2 Delete (κ_i, X_{m−1}(κ_i), X_{m−2}(κ_i)) AND κ_i = /ou/
^3 Delete (κ_i, X_{m−1}(κ_i), X_{m−2}(κ_i)) AND κ_i = /ri/
^4 Delete (X_{m−1}(κ_i)) AND κ_i = κ_i
^5 Delete (X_{m−1}(κ_i), X_{m−2}(κ_i)) AND link κ_i = (ref)/rr/ on the top of κ_i ∈ ct3
^6 Delete (κ_i = /r/, X_{m−1}(κ_i)) AND link κ_i = (rfola)/RR/ to the base of (X_{m−2}(κ_i) ∈ ct3)
^7 Delete (κ_i, X_{m−1}(κ_i), X_{m−2}(κ_i)) AND κ_i = /iy/
^8 R_SHh = rules of using /SHh/ (/SHhtb bIDhhAn/) [52]. Examples: (i) if (κ_i == /s/ AND X_{m−1}(κ_i) == /ri/) then output = /SHh/; (ii) if (κ_i == /s/ AND TempWord ∈ {/atI/, /aBhhI/, /anU/, /sU/}) then output = /SHh/; (iii) if (κ_i == /s/ AND TempWord ∈ {/nI/, /dU/, /bhI/, /aAbI/, /ctU/, /pRRAdu/} AND X_{m+1}(κ_i) == κ_link AND X_{m+2}(κ_i) ∈ {/k/, /Khh/, /p/, /Phh/}) then output = /SHh/
^9 R_N = rules of using /N/ (/Ntb bIDhhAn/) [52]. Example: if (κ_i == /N/ AND X_{m−1}(κ_i) ∈ {/ri/, /r/, /SHh/}) then output = /N/
^10 Delete (κ_i, X_{m−1}(κ_i)) AND link κ_i = (jfola)/j/ to the right of X_{m−2}(κ_i)
^11 ct3x = {/Ng/, /iy/, /n/, /Shh/, /SHh/} ∈ ct3κ
^12 JC = joint character = Delete (κ_i, X_{m−1}(κ_i), X_{m−2}(κ_i)) AND link (X_{m−2}(κ_i), X_{m−1}(κ_i)) [51]
Table 4  Lookup table for sample incorrect Bangla words and corresponding correct words (input → output)
1.  /id/ → /iid/                 30. /DhhrEN/ → /DhhrEn/
2.  /igl/ → /iigl/               31. /JAbEN/ → /jAbEn/
3.  /irrSHhA/ → /iirrSHhA/       32. /dUrrNAm/ → /dUrrnAm/
4.  /isbrI/ → /iiShhbrII/        33. /bRRAkSHhmN/ → /bRRAkSHhmn/
5.  /pkSHhI/ → /pkSHhII/         34. /JInISHh/ → /JInIs/
6.  /stRRI/ → /stRRII/           35. /SHhORhhs/ → /SHhORhhShh/
7.  /bINA/ → /bIINA/             36. /gRRISHh/ → /gRRIs/
8.  /urrDhhb/ → /uurrDhhb/       37. /mISHhr/ → /mIShhr/
9.  /uSHhA/ → /uuSHhA/           38. /pUlIs/ → /pUlIShh/
10. /uhj/ → /uuhj/               39. /sTEsn/ → /sTEShhn/
11. /mUk/ → /mUUk/               40. /sIr/ → /ShhIr/
12. /mUrrKhh/ → /mUUrrKhh/       41. /sAk/ → /ShhAk/
13. /mUmUrrSHh/ → /mUmUUrrSHh/   42. /sAsn/ → /ShhAsn/
14. /bImURhh/ → /bImUURHh/       43. /sIt/ → /ShhIIt/
15. /dRIRhh/ → /dRIRHh/          44. /srIk/ → /ShhrIIk/
16. /dRIRhhtA/ → /dRIRHhtA/      45. /bIsEs/ → /bIShhESHh/
17. /gARhh/ → /gARHh/            46. /srIr/ → /ShhrIIr/
18. /rURhhI/ → /rUURHhI/         47. /uJjl/ → /uJJl/
19. /rURhh/ → /rUURHh/           48. /pRRtIJOgI/ → /pRRtIjOgII/
20. /rARhh/ → /rARHh/            49. /JUdDhh/ → /jUdDhh/
21. /aASHhARhh/ → /aASHhARHh/    50. /JdI/ → /jdi/
22. /hTHhAt/ → /hTHhATt/         51. /Jm/ → /jm/
23. /iSHht/ → /iSHhTt/           52. /JKhhn/ → /jKhhn/
24. /utsAh/ → /uTtsAh/           53. /JAbJjIbn/ → /jAbJJIIbn/
25. /utsb/ → /uTtsb/             54. /JAtnA/ → /jAtnA/
26. /tRhhIt/ → /tRhhITt/         55. /Js/ → /jShh/
27. /ttkAlIn/ → /tTtkAlIIn/      56. /JmJ/ → /jmJ/
28. /ttpr/ → /tTtpr/             57. /JAbt/ → /jAbTt/
29. /ttsm/ → /tTtsm/             58. /JUg/ → /jUg/

Table 5  List of question-words used in interrogative sentences
/kI/ (what), /kOThhy/ (where), /kKhhn/ (when), /kKhhOn/ (when), /kE/ (who), /kEmn/ (how), /kOnTI/ (which), /kyTI/ (how many).
Mark), produced from the single sign S6, for simplicity. If a sentence contains any question word (QWord) listed in Table 5, the sentence is treated as interrogative and κ_punctuation is set to '?'/?/ (question mark); otherwise κ_punctuation is set to '|'/./ (full stop). A single word may form a complete sentence in Bangla, such as /KhhAb./ (I shall eat.) or /kE?/ (Who?).

All syntax and rules for the BLMA are listed in Tables 2 and 3, generated from [51–54] as input according to the BLM shown in Fig. 5. After applying these Bangla language modeling rules (BLMR), the system may still generate some incorrect words (where no BLMR is applicable); these are corrected using the lookup Table 4. Here, the special signs S1, S2, and S3 of ct6 represent "000", "10000", and "1000000", respectively [55], and are used to form composite numeral signs as shown in Figs. 6(c)–6(f). The three special signs S4, S5, and S6 of ct7 are categorized as defined in Table 1. Examples of hand-sign-spelled words, composite numerals, and sentences in BdSL are shown in Fig. 6.

4 Experimental result and discussion

4.1 Experimental setup

The proposed system uses the built-in webcam (USB 2.0 UVC HD Webcam) of an ASUS ZenBook UX305CA to capture the image sequence, with an Intel Core m7 processor (Intel(R) Core(TM) m7-6Y75 CPU, 1.20 GHz/1.51 GHz) and 8 GB RAM. The software uses EmguCV (C# in Microsoft Visual Studio 2008, an OpenCV wrapper) [56] on 64-bit MS Windows 10.

(Fig. 6: Example of hand-sign-spelled word, composite numeral and sentence in BdSL.)

In this experiment, the proposed system is trained using 5,200 images for 52 hand-signs: 100 images are captured for each hand-sign from 10 different signers to train the hand-sign classification phase.
The system asks 10 signers of different skin colors, four female and six male, to perform these signs; 10 images of each hand-sign are captured from each signer to train the system by generating the NOBV (Γ_i^j) and the WGV (Ω_i^j). Figure 1 presents an example set of the 52-hand-sign training dataset. Note that the training data of the proposed system (first and second phases, as shown in Fig. 2) consist only of hand-signs of single characters (not words, composite numerals, or sentences).

For testing, six sets of images covering the 52 hand-signs in BdSL have been used, for a total of 5,200 × 6 = 31,200 images. For each set, 10 new signers who did not take part in training performed the hand-signs, yielding 10 × 10 = 100 samples per hand-sign for testing the first phase (hand-sign classification). Test data were prepared in the following six environments:
• Environment-1 (E1): plain background with proper lighting;
• Environment-2 (E2): illumination-variation environment with plain background;
• Environment-3 (E3): cluttered, static background where skin-colored static objects are present;
• Environment-4 (E4): illumination-variation environment with cluttered background;
• Environment-5 (E5): cluttered, dynamic background where other persons are moving behind the signer; and
• Environment-6 (E6): cluttered, dynamic background with illumination variation, where
other persons are moving behind the signer.

The second phase of the proposed system is tested using 500 video clips of 500 hand-sign-spelled words, 100 video clips of 100 hand-sign-spelled composite numerals, and 80 video clips of 80 hand-sign-spelled sentences in BdSL from the 10 signers. Each hand-sign-spelled item (word, composite numeral, or sentence) is recorded in 10 video clips, one from each of the 10 signers. The test clips for the second phase were generated randomly in different environments with different backgrounds; other moving objects or persons are allowed only behind the signer performing the hand-signs.

For system training and testing, we plot accuracy-versus-CC graphs (Fig. 7) to fine-tune the value K_Γ of the NOBV (Γ_i^j), the normalized size of the clipped binary images I^m_norm(x, y), the size of each WGM for WGV (Ω_i^j) generation, and the K_Ω value of the first-step classifier, using selected offline samples of hand-signs. From these graphs, we set K_Γ = 50, I^m_norm(x, y) = 150 × 150, WGM = 5 × 5, and K_Ω = 3 to achieve high accuracy with reduced CC.

The system uses accuracy and computational cost for performance measurement. Accuracy is calculated using Eq. (8) [57] and tabulated in Table 6. The computational cost (CC) is the time to capture an image, detect and segment the hand-sign from the captured image, extract features, and match them against the trained feature vectors, in milliseconds per frame (ms/f):

accuracy = (TP + TN) / (TP + TN + FP + FN) × 100,        (8)

where the values of TP (true positive), FP (false positive), FN (false negative), and TN (true negative) are generated from separate confusion matrices for the six environments (E1, E2, E3, E4, E5, and E6).
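Eq. (8) applied per class can be sketched directly from a confusion matrix; `per_class_accuracy` is an invented helper for illustration, not part of the authors' tooling.

```python
def per_class_accuracy(cm, k):
    """Accuracy of class k per Eq. (8), from a square confusion matrix
    `cm` (rows = true class, columns = predicted class)."""
    total = sum(sum(row) for row in cm)
    tp = cm[k][k]                          # true positives for class k
    fn = sum(cm[k]) - tp                   # class-k samples predicted as others
    fp = sum(row[k] for row in cm) - tp    # other samples predicted as class k
    tn = total - tp - fn - fp
    return (tp + tn) / total * 100
```

For a two-class matrix [[9, 1], [2, 8]], class 0 has TP = 9, FN = 1, FP = 2, TN = 8, so Eq. (8) gives 17/20 x 100 = 85%.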
The confusion matrices themselves are not shown here for the sake of simplicity and space.

4.2 Result of hand-sign classification (first phase)

The results of hand-sign detection and skin-color segmentation are not presented here, as they are contributions of our previous system [7]. Here we present the experimental results of hand-sign classification using the proposed two-step classifier.

Table 6 presents the summarized results of recognizing the 52 hand-signs of the BdSL alphabet, numerals, and special signs in the six environments (E1–E6). The test results in Table 6 show that the 52 hand-signs are recognized and distinguished properly even in a cluttered, dynamic background with illumination variation, achieving mean accuracies of 97.12% for Environment-1, 96.73% for Environment-2, 96.21% for Environment-3, 95.83% for Environment-4, 95.17% for Environment-5, and 93.94% for Environment-6, and 95.83% overall, at a computational cost of 39.97 ms/f.

(Fig. 7: Accuracy versus CC for (a) different K_Γ values of the normalized outer boundary vector (Γ_i^j), (b) different normalizing sizes of I^m_norm(x, y), (c) different sizes of each WGM, and (d) different K_Ω values of the first-step classifier.)

For each environment, the recognition rates of the hand-signs “ ” and “ ” drop below about 92%, because the skin-color images of hand-signs “ ” and “
” are distinguishable, but their binary hand-sign images are very similar to each other, as shown in Fig. 8. In several environments, performance may also decrease when hand-signs are performed wrongly in front of the fixed camera; in particular, performing the hand-signs “ ”, “ ”, “ ”, and “ ” is most difficult in front of a fixed camera. For each environment, the recognition performance of one-handed numeral signs and special signs in BdSL is higher than that of the two-handed BdSL alphabet signs. The test results are nevertheless satisfactory in all environments, with Environment-1 (E1) the best case and Environment-6 (E6) the worst. Our experimental results show that the proposed system recognizes hand-signs successfully under various illuminations and backgrounds. Signer dependency is tackled with the NOBV and WGV feature vectors, and signer independence is demonstrated by the test results of the proposed system with 10 signers who did not take part in training.

Table 6  Results of BdSL hand-sign classification accuracy (%) and mean computational cost (ms/f) in six environments
Sign     E1   E2   E3   E4   E5   E6   Mean    CC
/a/      99   98   98   97   95   93   96.67   39.972
/aA/     100  100  98   98   97   95   98      38.873
/i/      98   97   97   97   95   94   96.33   39.953
/u/      96   97   97   97   96   95   96.33   39.995
/e/      97   97   96   97   97   95   96.5    39.985
/o/      96   97   95   96   95   94   95.5    40.882
/k/      96   96   96   96   96   94   95.67   40.775
/Khh/    96   95   95   95   95   93   94.83   40.885
/g/      100  99   97   97   98   96   97.83   39.775
/Ghh/    97   96   95   95   94   94   95.17   40.985
/c/      95   95   95   94   95   93   94.5    39.034
/Chh/    95   95   94   94   94   93   94.17   40.755
/J/      95   95   95   95   93   93   94.33   40.755
/Jhh/    95   95   94   94   93   93   94      40.789
/T/      100  100  98   97   96   95   97.67   38.887
/THh/    100  99   98   98   97   95   97.83   39.342
/D/      97   96   96   95   95   94   95.5    39.458
/DHh/    95   94   94   94   94   92   93.83   39.459
/t/      95   95   95   95   95   94   94.83   40.257
/Thh/    96   96   95   95   94   93   94.83   40.125
/d/      95   95   95   95   94   93   94.5    40.155
/Dhh/    95   95   95   95   94   92   94.33   39.348
/N/      94   94   94   94   94   92   93.67   39.345
/p/      95   95   94   94   95   92   94.17   41.312
/Phh/    95   95   94   94   93   93   94      40.255
/b/      95   94   94   94   94   93   94      40.245
/Bhh/    98   97   96   95   95   93   95.67   39.988
/m/      96   96   96   95   94   93   95      38.985
/y/      96   96   95   95   94   91   94.5    38.987
/r/      92   91   92   91   89   87   90.33   39.985
/l/      92   92   92   91   91   89   91.17   39.907
/s/      97   95   95   94   93   92   94.33   40.415
/h/      98   97   96   96   95   93   95.83   40.235
/Rhh/    97   96   96   95   95   95   95.67   40.125
/ng/     98   97   97   96   95   93   96      38.998
/Nh/     97   97   96   96   95   92   95.5    39.985
/0/      100  100  99   99   98   97   98.83   39.972
/1/      100  100  100  99   98   97   99      40.565
/2/      100  100  98   98   97   96   98.17   39.987
/3/      100  100  99   98   98   98   98.83   40.997
/4/      100  99   100  98   97   97   98.5    38.987
/5/      100  99   99   99   97   97   98.5    39.895
/6/      99   99   99   98   97   97   98.17   39.887
/7/      99   98   99   98   96   95   97.5    40.128
/8/      99   99   98   98   97   95   97.67   40.654
/9/      98   97   98   96   96   95   96.67   41.354
S1       98   97   96   95   95   94   95.83   38.447
S2       98   98   96   96   95   94   96.17   38.456
S3       97   97   96   96   96   95   96.17   39.734
S4       97   97   96   96   95   95   96      40.231
S5       98   98   97   96   96   95   96.67   41.173
S6       99   98   98   97   96   95   97.17   38.842
Mean     97.12 96.73 96.21 95.83 95.17 93.94 95.83 39.972

Before applying the proposed two-step classification technique, we tested hand-sign classification in four cases (Case-1 to Case-4) and plotted the results (accuracy versus computational cost) in the observation graph of Fig. 9, using selected offline samples of hand-signs, to show why the proposed two-step technique is the best case. In Case-1, hand-signs are classified using only the NOBV (Γ_i^j), with reduced accuracy (94.88%) but the highest classification speed. In Case-2, hand-signs are classified using only the WGV (Ω_i^j), with the lowest accuracy (94.78%) and a higher CC than Case-1. In Case-3, hand-signs are classified using the NOBV and WGV combined into the single feature vector ξ_i^j = [Γ_i^j, Ω_i^j] with a single-step classification technique, which obtains the highest accuracy (96.08%) but a significantly increased CC. In Case-4, hand-signs are classified by the proposed two-step technique based on the NOBV and WGV, respectively: if the classification score of the first-step classifier (NOBV) does not satisfy the specified condition, the system calls the second-step classifier (WGV). With Case-4 the system obtains high accuracy (96.06%), about the same as Case-3, while the CC is reduced significantly relative to Case-3. From Fig.
9, we conclude that the proposed two-step classification technique (Case-4) is the best case.

The previous system [27] processes i × j × M × N = 52 × 100 × 150 × 150 = 117,000,000 pixel-vector elements over the 5,200 hand-signs to recognize a single hand-sign. The proposed system instead needs to match at most [(j × i) × (K_Γ + (M_Ω × K_Ω))] = [(100 × 52) × (50 + (25 × 3))] = 650,000 vector elements over the 5,200 hand-signs. If the K_Ω sign classes selected by the first classifier are all the same, the system matches only [(100 × 52) × (50 + (25 × 0))] = 260,000 vector elements, and if two of the selected classes agree but one differs, it matches [(100 × 52) × (50 + (25 × 2))] = 520,000 vector elements, which reduces the CC.

4.3 Comparative analysis of different systems for hand-sign classification

Figure 10 shows a comparative analysis of the test results of the proposed system (BdSLR) against existing reputed BdSL classification systems: Rahaman et al. [8] using a contour-matching algorithm (denoted "CM"); Rahaman et al. [7] using WGV
(denoted "HSSCS"); Jasim et al. [4] using a Haar-like-feature-based cascaded classifier and a KNN classifier (denoted "Haar-KNN"); Xu [42] using a convolutional neural network classifier (denoted "CNN"); Rahaman et al. [27] using KNN (denoted "KNN"); and Karmokar et al. [31] using a neural network ensemble (denoted "NNE").

(Fig. 8: Similarity of binary hand-signs. (a) Distinguishable skin-color images; (b) indistinguishable binary hand postures.)

(Fig. 9: Performance analysis graph (accuracy vs. computational cost) of hand-sign classification in the four cases.)

We tested these systems (not only the classifiers but the whole pipelines, including hand-sign detection and segmentation, feature extraction, training, and classification) on the same dataset as our proposed system (BdSLR), computing mean accuracies and CCs from the corresponding confusion matrices. The systems' mean accuracy versus CC is plotted in Fig. 10, where each point indicates the overall performance of one system. The proposed system (BdSLR) is approximately two times faster than the previous systems "Haar-KNN" [4], "CNN" [42], and "KNN" [27], and about three times faster than "NNE" [31], while maintaining higher accuracy. The proposed system is more than five times slower than the previous systems "CM" [8] and "HSSCS" [7], but its accuracy is significantly higher than that of "CM" in different challenging environments (as shown in Fig. 11) and also higher than that of "HSSCS"; we have compromised on computational cost, but not on accuracy, to maintain the robustness of the system. From Fig. 10, we conclude that the proposed system (BdSLR) outperforms the existing reputed hand-sign classification systems.

Fig.
10: Comparative analysis of different systems.

4.4 Result of hand-sign-spelled BdSL recognition using BLMA (second phase)

Table 7 shows the accuracy of the BLMA in discovering "hidden characters" from the corresponding "recognized characters" in BdSL. Here we use 100 occurrences of each "recognized character" in different words to discover the corresponding "hidden characters", using selected offline samples. The system achieves 100% accuracy for most rules; the exceptions are the "hidden characters" /n/, /SHH/, /Shh/, and (jfola)/j/, because of exceptional Bangla words to which the BLMRs of the BLMA are not applicable and for which no alternative correct word exists in the lookup Table 4. The test results in Table 7 show that the proposed BLMA correctly discovers all "hidden characters" from the "recognized characters" of hand-sign-spelled BdSL.

Table 7  Accuracy (%) of the BLMA in discovering "hidden characters" from "recognized characters" (100 occurrences each)
/A/ from /a/: 100                  /I/ from /i/: 100
/U/ from /u/: 100                  /E/ from /e/: 100
/O/ from /o/: 100                  /ii/ from /i/: 100
/II/ from /i/: 100                 /uu/ from /u/: 100
/UU/ from /u/: 100                 /ei/ from /eLKi/: 100
/EI/ from ct3 + /...eLKi/: 100     /ou/ from /oLKu/: 100
/OU/ from ct3 + /...oLKu/: 100     /ri/ from /rLKi/: 100
/RI/ from ct3 + /...rLKi/: 100     /RR/ from ct3 + /...LKr/: 100
/rr/ from ct3 /rLK.../: 100        /iy/ from /iLKy/: 100
JC from Char1 + LK + Char2: 100    /Ng/ from /ng/: 100
(jfola)/j/ from /J/: 96            /j/ from /J/: 100
/Tt/ from /t/: 100                 /n/ from /N/: 95
/Shh/ from /s/: 95                 /SHH/ from /s/: 98
/RHh/ from /Rhh/: 100              ?/?/ from |/./: 100

In the experiment, we recorded 10 × 1 = 10 video-clip observations from the 10 signers for each hand-sign-spelled BdSL item (word, composite numeral, and sentence), using the 52 hand-signs, for 500 selected words, 100 composite numerals, and 80 sentences. The system achieves mean accuracies of 93.50% for hand-sign-spelled words, 95.50% for composite numerals, and 90.50% for sentences. Tables 8–10 present sample recognition results for hand-sign-spelled words, composite numerals, and sentences, respectively. The results show that the proposed BLMA works properly, discovering "hidden characters" from "recognized characters" across the 52 hand-signs and forming Bangla words, composite numerals, and sentences. Note that if even one hand-sign of a word, composite numeral, or sentence is classified wrongly during spelling, the whole item is counted as wrong, which affects the overall performance of the system. The high classification rate of the basic numeral signs (0–9) in BdSL raises the recognition rate of hand-sign-spelled composite numerals, as shown in Table 9.
Figure 11 shows example snapshots of successful hand-sign classification and of automatic recognition of hand-sign-spelled words, composite numerals, and sentences in several environments.

Table 8  Example results (27 of the 500 words) of accuracy (%) for automatic recognition of hand-sign-spelled words in BdSL
/iid/ (Eid): 95; /mUUk/ (Dumb): 93; /ShhIIt/ (Winter): 89; /sNgGhh/ (Organization): 93; /mRItjU/ (Death): 87; /hTHhaTt/ (Sudden): 95; /rAJj/ (State): 88; /DAktAr/ (Doctor): 87; /Jbr/ (Fever): 87; /dUNhKhhIt/ (Sorry): 89; /BhhAlO/ (Good): 91; /bAThhrUm/ (Toilet): 88; /bAsA/ (House): 89; /mA/ (Mother): 92; /bAbA/ (Father): 90; /kSHhT/ (Trouble): 88; /GhhUm/ (Sleep): 97; /KhhAbAr/ (Food): 80; /KhhUShhI/ (Happy): 94; /BhhAt/ (Rice): 93; /KhhAb/ (Eat): 87; /aAgAmI kAl/ (Tomorrow): 87; /gt kAl/ (Yesterday): 87; /3 TAkA/ (3 Taka): 97; /mAThhA/ (Head): 89; /dDhhI/ (Ke r): 91; /aAm/ (Mango): 97

Table 9  Example results (7 of the 100 composite numerals) of accuracy (%) for automatic recognition of hand-sign-spelled composite numerals in BdSL
(100): 92; (1000, using S1): 97; (100000, using S2): 97; (90000000, using S3): 96; (101): 97; (547): 96; (900003, using S1): 94

By using the AKF to track the ROI, using the proposed FRB-RGB model to segment the skin color, and the binary

Fig. 11  Example snapshots of the output of the proposed system in different challenging environments.
(a) Example hand-signs recognitionwith 10 different signers for 10 different signs; (b) example hand-signs-spelling recognition (word and/or sentence macking) with partialocclusion behind another persons are moving; (c) example hand-signs-spelled words recognition with illumination variation in outdoor scene;(d) example hand-signs-spelled sentence recognition with cluttered backgroundhand-sign extraction from the ROI based on segmentedskin-color pixels with specific motion, the proposed systemachieves the ability to recognize the hand-signs performed incluttered and dynamic background where other persons andskin-color objects are moving behind. It makes robustness ofthe system possible in real-time. The proposed system workssuccessfully in different challenging environments, such aswhen signs are performed by different signers (Fig. 11(a)), il-lumination is varied (Fig. 11(c)), hand-signs are performed incluttered background or hand-signs overlap with some skin-color region (Fig. 11(d)), and hand-signs are on the face(Fig. 11(a)). The</s>
proposed system can classify hand-signs and recognize hand-sign-spelled words and/or sentences in dynamic backgrounds where other persons are moving behind (as shown in Fig. 11(b)).

5 Conclusion

This paper presents a Bangla language modeling algorithm for automatic recognition of hand-sign-spelled Bangla sign language that interprets hand-sign-spelled BdSL into Bangla written words, composite numerals and sentences. In the First Phase of the system, the proposed two-step classifier implements hand-sign classification, which is tested for 52 Bangla hand-signs considering four cases: using an NOBV-based classifier, a WGV-based classifier, a classifier based on the combined NOBV and WGV feature vector, and the proposed two-step classifier. In the proposed two-step classifier, the system first classifies a hand-sign using NOBV; if the classification score is unsatisfactory, the system then uses another classifier based on WGV.

Table 10  Example results (r3) of accuracy for automatic recognition of hand-sign-spelled sentences in BdSL (the hand-sign-spelling column of sign images is not recoverable from the extraction)

Bangla sentence (English)                               Accuracy/%
/DAktAr DAk./ (Please call Doctor.)                     87
/aAmAkE 9000 TAkA dAo./ (Please give me 9000 taka.)     [value lost in extraction]
/Jbr Jbr lAgChhE./ (I feel fever.)                      87
/bAThhrUmE jAb./ (I feel to toilet.)                    90
/mA GhhUm pAcChhE./ (Mother, I want to sleep.)          87
/bAbA GhhUrtE jAb./ (Father, I want to go out.)         90
/aAmI BhhAt KhhAb./ (I shall eat rice.)                 86
/tOmAr nAm kI?/ (What is your name?)                    86
/kOThhAy jAbE?/ (Where shall you go?)                   86
r3: only 9 sample results are presented among 80 sentences

For all classifiers, hand-sign classification is done based on the maximum Inter-Correlation Coefficient (ICC) between the test feature vector and the pre-trained feature vectors.
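The two-step decision rule can be sketched as follows. Here the Inter-Correlation Coefficient is approximated by a Pearson correlation via `np.corrcoef`, and the feature stores, test vectors and acceptance threshold are illustrative assumptions, not the paper's actual NOBV/WGV definitions:

```python
import numpy as np

def best_icc_match(test_vec, trained_vecs):
    """Pick the pre-trained feature vector with the maximum correlation
    coefficient (Pearson, as a stand-in for ICC) against the test vector."""
    scores = {label: np.corrcoef(test_vec, ref)[0, 1]
              for label, ref in trained_vecs.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

def two_step_classify(nobv_vec, wgv_vec, nobv_store, wgv_store, threshold=0.9):
    """Step 1: classify on the NOBV feature; if the best score is
    unsatisfactory (below the assumed threshold), fall back to WGV."""
    label, score = best_icc_match(nobv_vec, nobv_store)
    if score >= threshold:
        return label
    label, _ = best_icc_match(wgv_vec, wgv_store)
    return label

# Toy feature stores for two hand-sign classes (illustrative values only).
nobv_store = {"sign_A": np.array([1.0, 2.0, 3.0, 4.0]),
              "sign_B": np.array([4.0, 3.0, 2.0, 1.0])}
wgv_store = {"sign_A": np.array([0.1, 0.9, 0.1, 0.9]),
             "sign_B": np.array([0.9, 0.1, 0.9, 0.1])}

# Confident NOBV match: resolved in step 1.
clear = two_step_classify(np.array([1.1, 2.0, 3.1, 3.9]),
                          np.array([0.1, 0.8, 0.2, 0.9]),
                          nobv_store, wgv_store)

# Ambiguous NOBV score (correlation 0.6 < 0.9): falls back to the WGV step.
unclear = two_step_classify(np.array([2.0, 1.0, 4.0, 3.0]),
                            np.array([0.8, 0.2, 0.9, 0.1]),
                            nobv_store, wgv_store)
```

The fallback structure explains the cost figures reported below: the cheap NOBV comparison handles confident cases, and the more expensive WGV comparison runs only when needed.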
For the classifier-perspective analysis, the system achieved a mean accuracy of 94.88% for the NOBV-based classifier, 94.78% for the WGV-based classifier, 96.08% for the classifier based on the combined NOBV and WGV feature vector, and 96.06% for the proposed two-step classifier based on NOBV and WGV, with computational costs of 12.007, 71.927, 80.337 and 39.972 ms/frame, respectively, as shown in Fig. 9. The analytical results show that the proposed two-step classifier based on NOBV and WGV performs better than the other cases. The proposed system is tested in the six different challenging environments (E1, E2, E3, E4, E5 and E6) presented in Table 6, achieving mean accuracies of 97.12% for E1, 96.73% for E2, 96.21% for E3, 95.83% for E4, 95.17% for E5 and 93.94% for E6, and 95.83% for the overall system, with a computational cost of 39.97 ms/frame. The experimental results suggest that the system is capable of recognizing the hand-signs of any sign language in any environment, provided it is trained properly. The proposed system is faster and simpler than other related systems while keeping high accuracy, as shown in Fig. 10. In the Second Phase of the system, the proposed BLMA is used to form Bangla written words, composite numerals and sentences by discovering the "hidden characters" based on "recognized characters" from the 52 hand-signs. Finally, the system is tested on classifying the hand-sign spellings of 500 words, 100 composite numerals and 80 sentences in BdSL using BLMA. In this experiment the system achieves mean accuracies of 93.50% for words, 95.50% for composite numerals and 90.50% for sentences in BdSL. These experimental results show that the proposed BLMA works properly with acceptable results.
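As a quick consistency check on the figures above, the overall accuracy matches the unweighted mean of the six per-environment accuracies (the values are copied from the text; treating the overall figure as a plain unweighted mean is an assumption):

```python
# Mean accuracies reported per environment E1..E6 (in %).
env_accuracy = {"E1": 97.12, "E2": 96.73, "E3": 96.21,
                "E4": 95.83, "E5": 95.17, "E6": 93.94}

# Unweighted mean over the six environments.
overall = sum(env_accuracy.values()) / len(env_accuracy)
```

Rounded to two decimals this gives 95.83, agreeing with the reported overall system accuracy.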
However, the system sometimes fails to distinguish two similar binary signs, such as “ ” and “ ”, whose color images are distinguishable but whose binary images are very similar, as shown in Fig. 8. The system may also fail to segment the hand-signs if any skin-color object with motion similar to that of the hand-signs is present in the ROI. These limitations will be addressed in future development of the system. The BLMA needs further development in future work, including all punctuation marks and all kinds of joint-letter representation, to interpret hand-sign-spelled words and sentences into written Bangla successfully. Nevertheless, this research provides a starting point for researchers in the field of hand-sign-spelled BdSL recognition. The system can be applied as an interpreter for communication between sign and non-sign people, and it can also be used for human–computer/machine interaction or robot control.

Acknowledgements  This research was partially supported and funded by the Information and Communication Technology (ICT) Division, Ministry of Posts, Telecommunications and IT, Government of the People’s Republic of Bangladesh.
Muhammad Aminur Rahaman received his BSc and MSc degrees in Computer Science & Engineering from the Department of Computer Science & Engineering, Islamic University, Bangladesh, in 2003 and 2004, respectively. He is a candidate for the PhD degree under the supervision of Prof. Dr. Md. Hasanuzzaman and Prof. Dr. Md. Haider Ali in the Department of Computer Science & Engineering, University of Dhaka, Bangladesh. He is a founder Director of Worldgaon (Pvt.) Limited, one of the well-known software development companies of Bangladesh. His current research interests include computer vision, sign language recognition and human-computer interaction. He is a member of IEEE.

Mahmood Jasim received his BSc and MSc degrees in Computer Science & Engineering from the Department of Computer Science & Engineering, University of Dhaka, Bangladesh, in 2011 and 2013, respectively. He joined the Department of Computer Science & Engineering as a lecturer in 2014. Currently he is pursuing his PhD degree at the University of Massachusetts Amherst. His current research interests include human-computer interaction, image processing, computer vision and artificial intelligence.

Md. Haider Ali received his PhD degree from the Department of Electronics & Information Engineering, Toyohashi University of Technology, Japan, in 2001. Prof. Ali completed his Bachelor's and Master's degrees in the Department of Applied Physics & Electronics (presently EEE), University of Dhaka, in 1984 and 1985, respectively. He has been a professor in the Department of Computer Science & Engineering, University of Dhaka, since June 2007.
His current research interests include human face recognition and expression detection, post-surgical expression simulation, soft-tissue deformation modeling, polygonal mesh simplification, and narrow-band video transmission/video conferencing.

Md. Hasanuzzaman received his PhD degree from the Department of Informatics, National Institute of Informatics (NII), The Graduate University for Advanced Studies, Japan. He graduated (with Honors) in 1993 from the Department of Applied Physics & Electronics, University of Dhaka, Bangladesh. He completed his Master of Science (MSc) in Computer Science in 1994 at the University of Dhaka. He joined the Department of Computer Science & Engineering, University of Dhaka, Bangladesh, as a Lecturer in 2000. Since March 2013, he has been serving as a professor in the Department of Computer Science & Engineering, University of Dhaka, Bangladesh. His current research interests include human-computer interaction, image processing, computer vision and artificial intelligence.
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d00200065007200200062006500730074002000650067006e0065007400200066006f00720020006600f80072007400720079006b006b0073007500740073006b00720069006600740020006100760020006800f800790020006b00760061006c0069007400650074002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002000730065006e006500720065002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f002000770079006400720075006b00f30077002000770020007700790073006f006b00690065006a0020006a0061006b006f015b00630069002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB 
<FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f0062006500200050004400460020006d00610069007300200061006400650071007500610064006f00730020007000610072006100200070007200e9002d0069006d0070007200650073007300f50065007300200064006500200061006c007400610020007100750061006c00690064006100640065002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM <FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e0074007200750020007400690070010300720069007200650061002000700072006500700072006500730073002000640065002000630061006c006900740061007400650020007300750070006500720069006f006100720103002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS 
<FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043c0430043a04410438043c0430043b044c043d043e0020043f043e04340445043e0434044f04490438044500200434043b044f00200432044b0441043e043a043e043a0430044704350441044204320435043d043d043e0433043e00200434043e043f0435044704300442043d043e0433043e00200432044b0432043e04340430002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SKY <FEFF0054006900650074006f0020006e006100730074006100760065006e0069006100200070006f0075017e0069007400650020006e00610020007600790074007600e100720061006e0069006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020006b0074006f007200e90020007300610020006e0061006a006c0065007001610069006500200068006f0064006900610020006e00610020006b00760061006c00690074006e00fa00200074006c0061010d00200061002000700072006500700072006500730073002e00200056007900740076006f00720065006e00e900200064006f006b0075006d0065006e007400790020005000440046002000620075006400650020006d006f017e006e00e90020006f00740076006f00720069016500200076002000700072006f006700720061006d006f006300680020004100630072006f00620061007400200061002000410064006f00620065002000520065006100640065007200200035002e0030002000610020006e006f0076016100ed00630068002e> /SLV 
<FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020006b006900200073006f0020006e0061006a007000720069006d00650072006e0065006a016100690020007a00610020006b0061006b006f0076006f00730074006e006f0020007400690073006b0061006e006a00650020007300200070007200690070007200610076006f0020006e00610020007400690073006b002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO <FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f00740020006c00e400680069006e006e00e4002000760061006100740069007600610061006e0020007000610069006e006100740075006b00730065006e002000760061006c006d0069007300740065006c00750074007900f6006800f6006e00200073006f00700069007600690061002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE 
<FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d002000e400720020006c00e4006d0070006c0069006700610020006600f60072002000700072006500700072006500730073002d007500740073006b00720069006600740020006d006500640020006800f600670020006b00760061006c0069007400650074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR <FEFF005900fc006b00730065006b0020006b0061006c006900740065006c0069002000f6006e002000790061007a006401310072006d00610020006200610073006b013100730131006e006100200065006e0020006900790069002000750079006100620069006c006500630065006b002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f006200610074002000760065002000410064006f00620065002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /UKR 
<FEFF04120438043a043e0440043804410442043e043204430439044204350020044604560020043f043004400430043c043504420440043800200434043b044f0020044104420432043e04400435043d043d044f00200434043e043a0443043c0435043d044204560432002000410064006f006200650020005000440046002c0020044f043a04560020043d04300439043a04400430044904350020043f045604340445043e0434044f0442044c00200434043b044f0020043204380441043e043a043e044f043a04560441043d043e0433043e0020043f0435044004350434043404400443043a043e0432043e0433043e0020043404400443043a0443002e00200020042104420432043e04400435043d045600200434043e043a0443043c0435043d0442043800200050004400460020043c043e0436043d04300020043204560434043a0440043804420438002004430020004100630072006f006200610074002004420430002000410064006f00620065002000520065006100640065007200200035002e0030002004300431043e0020043f04560437043d04560448043e04570020043204350440044104560457002e> /ENU (Use these settings to create Adobe PDF documents best suited for high-quality prepress printing. Created PDF documents can be opened with Acrobat and Adobe Reader 5.0 and later.) 
/CHS <FEFF4f7f75288fd94e9b8bbe5b9a521b5efa7684002000410064006f006200650020005000440046002065876863900275284e8e9ad88d2891cf76845370524d53705237300260a853ef4ee54f7f75280020004100630072006f0062006100740020548c002000410064006f00620065002000520065006100640065007200200035002e003000204ee553ca66f49ad87248672c676562535f00521b5efa768400200050004400460020658768633002> /Namespace [ (Adobe) (Common) (1.0) /OtherNamespaces [ /AsReaderSpreads false /CropImagesToFrames true /ErrorControl /WarnAndContinue /FlattenerIgnoreSpreadOverrides false /IncludeGuidesGrids false /IncludeNonPrinting false /IncludeSlug false /Namespace [ (Adobe) (InDesign) (4.0) /OmitPlacedBitmaps false /OmitPlacedEPS false /OmitPlacedPDF false /SimulateOverprint /Legacy /AddBleedMarks false /AddColorBars false /AddCropMarks false /AddPageInfo false /AddRegMarks false /ConvertColors /ConvertToCMYK /DestinationProfileName () /DestinationProfileSelector /DocumentCMYK /Downsample16BitImages true /FlattenerPreset << /PresetSelector /MediumResolution /FormElements false /GenerateStructure false /IncludeBookmarks false /IncludeHyperlinks false /IncludeInteractive false /IncludeLayers false /IncludeProfiles false /MultimediaHandling /UseObjectSettings /Namespace [ (Adobe) (CreativeSuite) (2.0) /PDFXOutputIntentProfileSelector /DocumentCMYK /PreserveEditing true /UntaggedCMYKHandling /LeaveUntagged /UntaggedRGBHandling /UseDocumentProfile /UseDocumentBleed false>> setdistillerparams /HWResolution [2400 2400] /PageSize [612.000 802.205]>> setpagedevice</s>
A Potent Model to Recognize Bangla Sign Language Digits Using Convolutional Neural Network

Available online at www.sciencedirect.com (ScienceDirect)
8th International Conference on Advances in Computing and Communication (ICACC-2018)
Procedia Computer Science 143 (2018) 611–618
https://doi.org/10.1016/j.procs.2018.10.438
1877-0509 © 2018 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/). Selection and peer-review under responsibility of the scientific committee of the 8th International Conference on Advances in Computing and Communication (ICACC-2018).

Md. Sanzidul Islam*, Sadia Sultana Sharmin Mousumi, AKM Shahariar Azad Rabby, Sayed Akhter Hossain, Sheikh Abujar
Dept. of Computer Science & Engineering, Daffodil International University, Dhaka-1205, Bangladesh
* Corresponding author. E-mail address: sanzidul15-5223@diu.edu.bd

Abstract
Hearing-impaired people have their own language, sign language, but it is difficult for the general population to understand. Sign language is the basic method of communication for deaf people in their everyday life, and sign digits are a major part of it. A machine translator is therefore necessary to allow them to communicate with hearing people, and computer-vision-based solutions are now a well-established way of making their language understandable. In this research work we aim to construct a deep learning model that recognizes Bangla Sign Language (BdSL) digits. In this approach a Convolutional Neural Network (CNN) is trained on the signs of the Ishara-Lipi training dataset. The model was trained and tested with 860 training images and 215 (20%) test images over ten digit classes, and achieved about 95% accuracy at recognizing Bangla sign language digits. This model is one step towards a BdSL machine translator.

Keywords: BdSL; Bangla Sign Language; CNN; Machine Learning; Deep Learning; NLP; Computer Vision; Pattern Recognition; Sign Digits; Sign Language

1. Introduction
"Deaf-mute" is a term that was historically used to identify a person who was either deaf and used a sign language, or both deaf and unable to speak [1]. Such people are limited only in hearing or speaking; they can do most other things, and communication with the general population is the main thing that sets them apart. Hearing-impaired people can live much like anyone else if there is a way for them to communicate with hearing people, and sign language is the primary means of that communication. People who know sign language can effectively talk and listen through it. Sign digits are likewise useful for daily accounting and for communication between the general population and the deaf community.
Sign language is a visual language that uses hand shapes, facial expressions, gestures and body language [2].
Deaf people generally share their feelings through various hand shapes and movements. A huge amount of research has been done on recognizing sign language with techniques such as Hidden Markov Models, skeleton detection and Principal Component Analysis (PCA) [3][4][5]. Other notable techniques involve the motion history of gestures, motion-capture gloves, and computer vision with variously coloured gloves [6][7].
In our approach, a CNN is used for classification. A Convolutional Neural Network (CNN, or ConvNet) is a class of deep, feed-forward artificial neural networks that has been applied successfully to analyzing visual imagery [8]. CNNs need relatively little pre-processing compared to other image classification algorithms: the network learns the filters that in traditional algorithms were hand-engineered.

Fig. 1. Bangla sign language digits.

In this paper we first review the literature, then describe the preparation of our model, then discuss model optimization, and finally evaluate the model.

2. Literature Review
This research aims to construct a model that identifies the digits of BdSL. Several researchers have used multiple approaches to recognize signs in different settings.
A New Approach of Sign Language Recognition System for Bilingual Users [9] can recognize 11 Bengali digits and 16 words. The authors proposed a universal interpreter with skin detection and feature extraction, using a database of 27 x 10 x 20 images.
Numbers have been recognized effectively in Indian Sign Language Recognition [10], which presents a framework for a human-computer interface capable of recognizing signs from Indian sign language with PCA (Principal Component Analysis).
In Sign Language Recognition using Microsoft Kinect [11], the authors used computer vision algorithms to build characteristic depth and motion profiles for each sign language digit 0-9; the resulting feature matrix was trained with an SVM classifier.
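As noted in the introduction, the appeal of a CNN is that it learns its filters rather than relying on hand-engineered ones. As a minimal numpy-only illustration of the underlying operation (our own sketch, not code from this paper; the toy image and edge kernel are illustrative), a single convolution with a fixed vertical-edge kernel looks like this:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the basic operation a CNN
    convolution layer applies (with learned rather than fixed kernels)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-engineered vertical-edge kernel (Sobel-like); a CNN would
# learn kernels of this kind from data instead of being given them.
edge_kernel = np.array([[1, 0, -1],
                        [2, 0, -2],
                        [1, 0, -1]], dtype=float)

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# The response is nonzero only where windows straddle the edge.
response = conv2d(img, edge_kernel)
```

A convolution layer holds many such kernels at once and adjusts their weights by gradient descent, which is what "learning the filters" means in practice.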
This Kinect-based approach, however, depends on a specific camera device.
Fine Hand Segmentation using Convolutional Neural Networks [12] proposed a deep-learning method for recognizing hand-gesture views very accurately. Their model maps convolution layers directly to a segmentation mask through a fully connected layer, and the authors aimed to make it as efficient in real time as possible.
A recent work recognizes a Nigerian indigenous sign language: a Yoruba Sign Language recognition system [13] using image processing and Artificial Neural Networks (ANN).

3. Proposed Methodology
A Convolutional Neural Network is used in this system to recognize hand signs. This section discusses the layers of the network, the dataset properties, data processing, model training and the rest of the methodology.

3.1. Dataset properties
The Ishara-Lipi dataset, collected for this project, was used to train the model. It contains Bangla Sign Language digits from 0 to 9 and has the following properties:
• Every class has 100 images of different people's hands.
• The dataset has 1000 (10 x 100) images in total.
• All sign images are cropped and resized to 128 x 128 pixels.
• Images are stored in .JPG format.
• Images are grayscale and binary coloured, and then underwent some preprocessing.

Fig. 2. Ishara-Lipi dataset samples.

3.2. Data Preprocessing
The Ishara-Lipi dataset provides 128 x 128 pixel grayscale images. Some preprocessing was done to make them usable for training. First, all images were resized to 28 x 28 pixels. The images were converted to grayscale, then binarized, and given the correct labels. Finally the image pixels were saved into a CSV file to reduce the computation needed. The binarization method determines the threshold automatically from the image using Otsu's method.

Fig. 3. Data preprocessing by Otsu's method.

3.3. Designing The Model
To recognize these digits we used a multi-layer convolutional neural network whose layers are connected to each other. The model is a multi-layered CNN with two sub-branches. The first two layers are the same: two convolution layers with same padding and swish (3) activation, using 32 filters and a 5 x 5 kernel, followed by a 2 x 2 max-pooling layer with a 25% dropout layer. All dropout layers here are used to reduce overfitting. The model also uses the ADAM optimizer [14].
The output of these first two convolution layers goes as input to two sub-branches. Both sub-branches contain the same two convolutional layers with the same swish activation and padding, 64 filters and a 5 x 5 kernel, followed by another convolutional layer with a 3 x 3 kernel. The outputs of the last two sub-branch convolutional layers are added together and passed through a max-pooling layer with a 20% dropout [15] layer.
The layers are then flattened and fed to a fully connected dense layer with 2048 hidden nodes. The final output layer has 10 nodes (one per digit class) with SoftMax (1) activation.
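Two of the steps described above lend themselves to a short sketch: the automatic Otsu thresholding used in the preprocessing of §3.2, and the swish and softmax activations used in the model of §3.3. The following is our own minimal numpy-only illustration under a synthetic toy image, not the authors' implementation:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of an 8-bit grayscale histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    n = gray.size
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / n              # background weight
        w1 = 1.0 - w0                        # foreground weight
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / (w0 * n)
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / (w1 * n)
        var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def swish(x):
    """Swish(x) = x * sigmoid(x), equation (3)."""
    return x / (1.0 + np.exp(-x))

def softmax(y):
    """S(y_i) = e^{y_i} / sum_j e^{y_j}, equation (1), shifted for stability."""
    e = np.exp(y - y.max())
    return e / e.sum()

# Synthetic 128 x 128 "sign" image: dark background, bright hand region.
img = np.full((128, 128), 30, dtype=np.uint8)
img[32:96, 32:96] = 200

idx = np.linspace(0, 127, 28).astype(int)    # nearest-neighbour resize to 28 x 28
small = img[np.ix_(idx, idx)]
t = otsu_threshold(small)
binary = (small > t).astype(np.uint8)        # 0/1 mask, one CSV row when flattened

probs = softmax(np.array([0.1, 2.0, -1.0]))  # a 3-class stand-in for the 10-digit output
```

On a bimodal image like this, the chosen threshold falls between the background and foreground intensities, so the mask separates the hand from the background regardless of lighting level; that is what "determined automatically from the image" means here.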
Using softmax activation means performing logistic regression on the features extracted before the final fully connected layer:

S(y_i) = e^{y_i} / Σ_j e^{y_j}    (1)

At this stage of the model, a flatten function is used for shape optimization. The basic idea of applying the flatten and dense layer functions, and their output pattern, is shown below (Fig. 4 and Fig. 5).

Fig. 4. Dense layer effect.
Fig. 5. Applying flatten on dataset.

3.4. Activation
The most commonly used activation function nowadays is ReLU (2), defined by

ReLU(x) = max(0, x)    (2)

ReLU passes positive inputs through unchanged and zeroes out negative ones, which can create the problem of "dead neurons". Better alternatives have been proposed, such as ELU and SELU. Another activation function now used for efficiency is Swish (3), which has a very simple equation:

Swish(x) = x · σ(x)    (3)

Fig. 6 shows the architecture of the whole neural network.

Fig. 6. CNN model architecture.

3.5. Model Optimization
Model optimization makes the model more efficient and reliable on input data, and some optimization techniques were also applied in this deep learning model. The model was compiled with Stochastic Gradient Descent (SGD) as the optimizer; SGD performs a parameter update for each training example, which makes it a much faster technique that usually performs a single update at a time.
Cross-entropy is a good choice for the cost function and is widely used to improve classification and prediction in neural networks. Here a categorical cross-entropy is used as the loss function (4):

L_i = −Σ_j t_{i,j} log(p_{i,j})    (4)

3.6. Model Summary
Table 1. Summary of the model: all layers, their output shapes, parameter counts and connections.

Layer No. (type)                     Output Shape        Param    Connected to
1  Input (InputLayer)                (None, 28, 28, 1)   0        -
2  conv2d_1 (Conv2D)                 (None, 28, 28, 32)  832      Input[0][0]
3  conv2d_2 (Conv2D)                 (None, 28, 28, 32)  25632    conv2d_1[0][0]
4  max_pooling2d_1 (MaxPooling2D)    (None, 14, 14, 32)  0        conv2d_2[0][0]
5  dropout_1 (Dropout)               (None, 14, 14, 32)  0        max_pooling2d_1[0][0]
6  conv2d_3 (Conv2D)                 (None, 14, 14, 64)  51264    dropout_1[0][0]
7  batch_normalization_1 (BatchNor)  (None, 14, 14, 64)  256      conv2d_3[0][0]
8  conv2d_4 (Conv2D)                 (None, 14, 14, 64)  36928    batch_normalization_1[0][0]
9  conv2d_5 (Conv2D)                 (None, 14, 14, 64)  51264    dropout_1[0][0]
10 batch_normalization_2 (BatchNor)  (None, 14, 14, 64)  256      conv2d_4[0][0]
11 conv2d_6 (Conv2D)                 (None, 14, 14, 64)  36928    conv2d_5[0][0]
12 max_pooling2d_2 (MaxPooling2D)    (None, 7, 7, 64)    0        batch_normalization_2[0][0]
13 max_pooling2d_3 (MaxPooling2D)    (None, 7, 7, 64)    0        conv2d_6[0][0]
14 dropout_2 (Dropout)               (None, 7, 7, 64)    0        max_pooling2d_2[0][0]
15 dropout_3 (Dropout)               (None, 7, 7, 64)    0        max_pooling2d_3[0][0]
16 add_1 (Add)                       (None, 7, 7, 64)    0        dropout_2[0][0], dropout_3[0][0]
17 conv2d_7 (Conv2D)                 (None, 7, 7, 64)    36928    add_1[0][0]
18 max_pooling2d_4 (MaxPooling2D)    (None, 3, 3, 64)    0        conv2d_7[0][0]
19 dropout_4 (Dropout)               (None, 3, 3, 64)    0        max_pooling2d_4[0][0]
20 flatten_1 (Flatten)               (None, 576)         0        dropout_4[0][0]
21 dense_1 (Dense)                   (None, 2048)        1181696  flatten_1[0][0]
22 Fully_connected_2 (Dense)         (None, 10)          20490    dense_1[0][0]

Total params: 1,442,474
Trainable params: 1,442,218
Non-trainable params: 256

4. Model Evaluation
The model developed on the Ishara-Lipi dataset reached 95.35% training accuracy and 94.88% validation accuracy. The training and validation losses are shown in the table and graphs below.

Table 2. Training and validation results.

Evaluation            Rate
Training Loss         12.38%
Validation Loss       26.13%
Training Accuracy     95.35%
Validation Accuracy   94.88%

Fig. 7. Graphical view of accuracy and loss.

A confusion matrix (error matrix) for the model is given below in Fig. 8.

Fig. 8. Confusion matrix output graph.

5. Conclusion and Future Work
This paper presents a deep-learning-based Bengali Sign Language digit recognition system. For sign recognition methods, vision-based models and digit identification, a convolutional neural network proves a strong candidate. The proposed model delivers its output in text form, which helps remove the communication barrier between hearing-impaired people and the general population. For standardization of Bangla Sign Language, we want to
<s>use our dataset and the model as a platform. However, not everyone can understand sign language, so in future we will also convert conversation into signs for pleasant communication between different users. In future we will enrich our database, recognize more characters, and eventually recognize gestures of Bangla Sign Language and convert them to Bangla text.
References
[1] https://en.wikipedia.org/wiki/Deaf-mute [Last accessed 30th April 2018].
[2] https://www.ndcs.org.uk/familysupport/communication/signlanguage/whatissign.html [Last accessed 30th April 2018].
[3] Oliveira, V. A., and A. Conci. "Skin Detection using HSV color space." H. Pedrini, & J. Marques de Carvalho, Workshops of Sibgrapi. 2009.
[4] B. D. Zarit, B. J. Super and F. K. H. Quek, "Comparison of five color models in skin pixel classification," Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, 1999. Proceedings. International Workshop on, Corfu, 1999, pp. 58-63.
[5] S. N. Sawant and M. S. Kumbhar, "Real time Sign Language Recognition using PCA," 2014 IEEE International Conference on Advanced Communications, Control and Computing Technologies, Ramanathapuram, 2014, pp. 1412-1415.
[6] Zhang, Hao, Wen Xiao Du, and Haoran Li. "Kinect gesture recognition for interactive system." Stanford University Term Paper for CS 299 (2012).
[7] Parton, Becky Sue. "Sign language recognition and translation: A multidisciplined approach from the field of artificial intelligence." Journal of Deaf Studies and Deaf Education 11.1 (2005): 94-101.
[8] https://en.wikipedia.org/wiki/Convolutional_neural_network [Last accessed 30th April 2018].
[9] S. M. K. Hasan and M. Ahmad, "A new approach of sign language recognition system for bilingual users," 2015 International Conference on Electrical & Electronic Engineering (ICEEE), Rajshahi, 2015, pp. 33-36.
[10] D. Deora and N. Bajaj, "Indian sign language recognition," 2012 1st International Conference on Emerging Technology Trends in Electronics, Communication & Networking, Surat, Gujarat, India, 2012, pp. 1-5.
[11] A. Agarwal and M. K. Thakur, "Sign language recognition using Microsoft Kinect," 2013 Sixth International Conference on Contemporary Computing (IC3), Noida, 2013, pp. 181-185.
[12] Vodopivec, Tadej, Vincent Lepetit, and Peter Peer. "Fine hand segmentation using convolutional neural networks." arXiv preprint arXiv:1608.07454 (2016).
[13] Oyewole, Ogunsanwo Gbenga, et al. "Bridging Communication Gap Among People with Hearing Impairment: An Application of Image Processing and Artificial Neural Network." International Journal of Information and Communication Sciences 3.1 (2018): 11.
[14] Kingma, Diederik P., and Ba, Jimmy. "Adam: A Method for Stochastic Optimization." arXiv:1412.6980 [cs.LG], December 2014.
[15] Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. "Dropout: A Simple Way to Prevent Neural Networks from Overfitting." Journal of Machine Learning Research 15 (2014): 1929-1958.</s>
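As a quick arithmetic check on the layer summary above, the non-zero parameter counts can be reproduced from the standard Conv2D and Dense parameter formulas. The sketch below is illustrative only; the 3 × 3 kernel of conv2d_7 is inferred from its 36928-parameter count rather than stated in the table:

```python
# Parameter-count formulas for the trainable layers in the summary above.
# Conv2D: kernel_h * kernel_w * in_channels * filters + filters (one bias per filter)
# Dense:  in_units * out_units + out_units (one bias per output unit)

def conv2d_params(kernel_h, kernel_w, in_channels, filters):
    return kernel_h * kernel_w * in_channels * filters + filters

def dense_params(in_units, out_units):
    return in_units * out_units + out_units

# conv2d_7: 64 input channels from add_1, 64 filters, 3x3 kernel
print(conv2d_params(3, 3, 64, 64))      # 36928, as in the table
# dense_1: flattening (3, 3, 64) gives 576 inputs, mapped to 2048 units
print(dense_params(3 * 3 * 64, 2048))   # 1181696
# fully_connected_2: 2048 units down to the 10 digit classes
print(dense_params(2048, 10))           # 20490
```

Pooling, dropout, add and flatten layers carry no weights, which is consistent with their 0 entries in the table.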
<s>A Simple and Mighty Arrowhead Detection Technique of Bangla Sign Language Characters with CNN
Chapter · July 2019. DOI: 10.1007/978-981-13-9181-1_38
Md. Sanzidul Islam, Sadia Sultana Sharmin Mousumi, AKM Shahariar Azad Rabby, and Syed Akhter Hossain
Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
{sanzidul15-5223,sadia15-5191,azad15-5424}@diu.edu.bd, aktarhossain@daffodilvarsity.edu.bd
Abstract. Sign language is argued to be the first language of hearing-impaired people.
It is the most immediate and obvious way for deaf and mute people, who have speech and hearing impairments, to express themselves to the general public. An interpreter is therefore needed whenever a hearing person wants to communicate with a deaf and mute person. In Bangladesh, around 2.4 million people use sign language, yet the work on Bangladeshi Sign Language (BdSL) is extremely scarce. In this paper, we present a BdSL recognition model constructed using 50 sets of hand-sign images. Bangla sign alphabets are identified by resolving their shape and assimilating the structures that characterize each sign. The proposed model uses a multi-layered Convolutional Neural Network (CNN); CNNs are able to automate the process of feature formulation. The model finally attained 92% accuracy on our dataset.
Keywords: Bangla Sign Language · NLP · Computer vision · Machine learning · Image processing · Sign language characters · BdSL · BSL · CNN · Pattern recognition
1 Introduction
Deafness is a disability that impairs hearing and leaves a person unable to listen [1], while muteness is a disability that impairs speech and leaves a person unable to talk [2]. Deaf and mute people simply cannot speak or hear, but they can do everything else; the one thing that separates them from ordinary people is communication. Hearing-impaired people could live like anyone else if there were a way to communicate, and sign language is the only way for deaf and mute people to do so.
A sign language is a language expressed through combinations of gestures and movements of the hands. Sign language is a visual language, which is why sign language and spoken language are different. In the real world, people encounter different gestures; Fig. 1 shows some examples. Different countries have different sign languages, depending on their alphabets and native expressions.
Fig. 1. Bangla sign language character signs.
© Springer Nature Singapore Pte Ltd. 2019. K. C. Santosh and R. S. Hegadi (Eds.): RTIP2R 2018, CCIS 1035, pp. 429–437, 2019. https://doi.org/10.1007/978-981-13-9181-1_38</s>
<s>There are various sign languages, for example American, Arabic, French, Spanish, Chinese, and Indian. In Bangladesh, around 2.4 million people use Bengali Sign Language, but ordinary people are not accustomed to their signs. For effective communication, speech- and hearing-impaired people and hearing people must share the same understanding of each individual gesture. It is difficult for deaf and mute people to use their signs, as there is no appropriate model that serves as a communication medium, and this has created a distance in society. It is therefore essential to construct a model that converts sign language to text and helps mute people communicate with the general public and with each other. Nowadays, Bangladeshi Sign Language recognition (BdSL) has become one of the challenging topics in machine learning and computer vision.
In this paper, a CNN-based sign language recognition system is proposed to achieve a high recognition rate. We focus on static hand gestures in Bangla Sign Language (BSL), which remains challenging because several signs are visually similar. We therefore take advantage of convolutional neural networks to build a real-time, accurate sign language recognition system. It is worth mentioning that the obstacle of segmenting moving hands from the background can be avoided because CNNs have the ability to learn features automatically from raw data without any prior knowledge [3].
2 Literature Review
Convolutional neural networks have been very effective in image recognition and classification problems, and have been successfully applied to human sign recognition in recent years [4]. An automatic sign language finger-spelling system uses a CNN architecture on Kinect depth images. The system trained CNNs for the classification of 24 alphabets and the digits 0–9 using 33,000 images, and trained the classifier with different parameter configurations [5].
Kang et al. take a well-organized first step towards an automatic finger-spelling recognition system using convolutional neural networks (CNNs) on depth maps. They consider a comparatively larger number of classes than the preceding literature and train CNNs for the classification of 31 alphabets and numbers using a subset of depth data collected from multiple subjects [6]. The paper Deep Convolutional Neural Networks for Sign Language Recognition proposes a CNN architecture for classifying selfie sign language gestures. A stochastic pooling method is applied which pools the benefits of both max and mean pooling. The authors generated a selfie sign language database of 200 ISL signs with 5 signers at 5 user-dependent viewing angles, for 2 s each at 30 fps, producing a total of 300,000 sign video frames [7].
Hosoe et al. demonstrated a system for recognizing static finger spellings in images. The recognition of hand gestures is done with a convolutional neural network trained on physical images; they recorded 5,000 images of static finger spellings from Japanese Sign Language [8]. Huang et al. developed a 3D CNN model for sign language recognition that captures and extracts temporal features by performing 3D convolutions, and use a multilayer perceptron classifier to classify these feature representations [9]. A voice/text-format architecture is proposed using the neural</s>
<s>network identification approach to translate sign language; the Point of Interest (POI) and trajectory ideas introduced in the paper Real-time Sign Language Recognition based on Neural Network Architecture add originality and cut the storage memory requirement [10].
Pigou et al. contribute a recognition system using the Microsoft Kinect, convolutional neural networks (CNNs) and GPU acceleration instead of complex handcrafted features; they were able to recognize 20 Italian gestures with 91.7% accuracy [11]. Tsai and Huang use a Support Vector Machine (SVM) to recognize static signs, apply an HMM model to classify dynamic signs, and then use a finite state machine to confirm the grammatical correctness of the recognized TSL sentence [12].
Yasir et al. used a Leap Motion Controller (LMC) to capture continuous frames and preprocess them, extracting the vital features of the hand and fingers with the LMC. They presented a segmented HMM to separate signs of expression from the continuous frames via transition states. After fetching the expression, they fed all the features into an input layer and passed them on to the convolutional layer [13].
In this paper, we develop a CNN-based recognition system, CNNs being a significant algorithm for object recognition.
3 Proposed Methodology
A neural network, specifically a convolutional neural network, is used in this system to recognize hand signs. The network layers, dataset properties, data processing, model training and the rest of the methodology are discussed in this section.
3.1 Dataset Properties
We used the Ishara-Lipi dataset, collected for this project, to train the model. It contains Bangla Sign Language characters from 0 to 35 (Fig. 2). The dataset has the following properties:
– Every class has 50 images of different people's hands.
– The Ishara-Lipi dataset has 36 * 50 = 1800 images in total.
– All sign images are cropped and resized to 128 * 128 pixels.
– Dataset images are stored in .JPG format.
Fig. 2. Bangla sign language characters dataset samples.
3.2 Data Preprocessing
The Ishara-Lipi dataset provides 128 * 128 pixel grayscale images. To build the model we performed some preprocessing, converting the grayscale images to binary with a threshold determined automatically from each image using Otsu's method.
3.3 Model Preparation
Algorithm 1:
1: Convolution 1 (Filter, Kernel Size, Stride, Padding, Activation)
2: Convolution 2 (Filter, Kernel Size, Stride, Padding, Activation)
3: Convolution 3 (Filter, Kernel Size, Stride, Padding, Activation)
4: Convolution 4 (Filter, Kernel Size, Stride, Padding, Activation)
5: Convolution 5 (Filter, Kernel Size, Stride, Padding, Activation)
6: Convolution 6 (Filter, Kernel Size, Stride, Padding, Activation)
7: Convolution 7 (Filter, Kernel Size, Stride, Padding, Activation)
8: Convolution 8 (Filter, Kernel Size, Stride, Padding, Activation)
9: Convolution 9 (Filter, Kernel Size, Stride, Padding, Activation)
10: Convolution 10 (Filter, Kernel Size, Stride, Padding, Activation)
11: Flatten (data format)
12: Dense (Units, Activation, Kernel initializer, Bias Initializer)
13: Dropout (Rate)
14: Dense (Units, Activation, Kernel initializer, Bias Initializer)
15: Dropout (Rate)
16: Dense (Units, Activation, Kernel initializer, Bias Initializer)
17: end for
The proposed model uses the ADAM optimizer with a learning rate of 0.001 and a multi-layered CNN. For convolutions 1 and 2, the number of filters is 30 and the kernel size is</s>
<s>(3 × 3), with stride (1 × 1), "same" padding and ReLU (1) activation. The other convolutional layers follow with filter counts of 20 and 60 and kernel sizes of 3, 5 and 7. A 25% dropout is then used to reduce overfitting.
ReLU(x) = max(0, x)   (1)
For convolutions 3, 4 and 5, the filter count is 20, the kernel sizes are (3 × 3), (5 × 5) and (7 × 7), with stride (1 × 1), "same" padding and ReLU activation, again followed by 25% dropout. The output is then flattened and passed to a dense layer with 2560 units, ReLU activation and 50% dropout. The final output layer uses 36 units with SoftMax (2) activation.
S(y_i) = e^{y_i} / Σ_j e^{y_j}   (2)
A dense layer is simply a linear operation on the layer's input vector; it works as shown in Fig. 3.
Fig. 3. Dense layer working method.
The flattening step is needed so that fully connected layers can be used after the convolutional layers (Fig. 4).
Fig. 4. Flattening layer working method.
Finally, the whole model architecture is shown in Fig. 5.
Fig. 5. The whole model architecture.
3.4 Model Optimization and Learning Rate
The choice of optimization algorithm can make a substantial difference to the result of deep learning and computer vision work. The Adam paper says, "...many objective functions are composed of a sum of subfunctions evaluated at different subsamples of data; in this case, optimization can be made more efficient by taking gradient steps w.r.t. individual sub-functions ...". The Adam optimization algorithm is an extension of stochastic gradient descent that has recently been adopted in many computer vision and natural language processing applications. The method computes individual adaptive learning rates for different parameters from estimates of the first and second moments of the gradients. The proposed method uses the ADAM optimizer with a learning rate of 0.001, also when using the neural network to perform the classification and prediction task.
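Equations (1) and (2) above can be checked numerically. The short NumPy sketch below is our own illustration; the max-subtraction inside the softmax is a standard numerical-stability trick, not something the paper specifies:

```python
import numpy as np

def relu(x):
    # Eq. (1): ReLU(x) = max(0, x), applied element-wise
    return np.maximum(0.0, x)

def softmax(y):
    # Eq. (2): S(y_i) = e^{y_i} / sum_j e^{y_j}
    # Shifting by max(y) leaves the result unchanged but avoids overflow.
    e = np.exp(y - np.max(y))
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])
print(relu(logits))     # negative entries are clipped to 0
probs = softmax(logits)
print(probs)            # 36 such values would feed the final sign-class layer
print(probs.sum())      # sums to 1: a valid probability distribution
```

The largest logit always receives the largest probability, which is why the predicted class is simply the argmax of the softmax output.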
A recent study shows that the cross-entropy function performs better than classification error and mean squared error: with cross-entropy error the weight changes do not keep getting smaller and smaller, so training is less likely to stall. The proposed method uses categorical cross-entropy (3) as the loss function.
L_i = −Σ_j t_{i,j} log(p_{i,j})   (3)
To make the optimizer converge faster and closer to the global minimum of the loss function, an automatic learning-rate reduction method is used. The learning rate is the step size with which the optimizer walks towards the minimum loss. With a high learning rate the model converges quickly but may get stuck in a local minimum instead of the global minimum. To keep the advantage of the fast computation of a high learning rate, the model dynamically decreases the learning rate after each epoch by monitoring the validation accuracy.
4 Model Evaluation
The dataset was divided into two portions: training data and test data. The model was trained on the training data and then validated on the validation data. For the Ishara-Lipi sign character database, after 30 epochs the model reaches 92.65% accuracy on the training set and 92.74% accuracy on the validation set. Figure 6 shows the loss and accuracy on the training and validation sets.
Fig. 6. Model evaluation graph.
5 Conclusion
Developing models that recognize signs from images is a challenging task. The capability</s>
<s>of automatically recognizing sign language could have a great impact on the lives of hearing-impaired people and will help them in their daily communication.
In this paper, we presented a convolutional neural network (CNN) approach to classifying Bangla Sign Language. The CNN has four convolutional layers, which increases the speed and accuracy of recognition; it can produce outcomes in real time and recognize static sign language gestures. We also introduced a self-made large dataset that includes 1800 images of 36 alphabets of Bangla Sign Language; this dataset is open to all researchers. We were able to obtain an accuracy of 88% for our CNN classifier. By contributing to the field of automatic sign language recognition, the goal of our model is to reduce the difficulty of communication between hearing-impaired and hearing people.
6 Future Work
By studying the limits of the completed method, such as structure classification, a more exact sign recognition system can be developed, and we will try to make our model more efficient in future. We experimented with 36 Bengali alphabets and will extend the accuracy to all the Bengali alphabets. In future, additional features like body movements and facial expressions will be proposed for BdSL, and enlarging the vocabulary can also be counted as future work. Our final goal is to build a model that identifies BdSL signs and interprets them into Bangla text; we would like to establish this model as a standard platform.
Acknowledgement. We would like to express our heartiest appreciation to all those who gave us the possibility to complete this research under Daffodil International University. Special gratitude goes to the Daffodil International University NLP and Machine Learning Research Lab for their instructions and support. Furthermore, we acknowledge that this research was partially supported by Bijaynagar Deaf School, Mirpur Deaf School, Mymensingh Deaf School, CDD (Centre for Disability in Development) and all of the volunteer teams who gave permission to collect valuable data. Any errors are our own and should not tarnish the reputations of these esteemed persons.
References
1. Cambridge University Press: Cambridge Dictionary (2017). https://dictionary.cambridge.org/dictionary/english/deaf
2. Cambridge University Press: Cambridge Dictionary (2017). https://dictionary.cambridge.org/dictionary/english/mute
3. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
4. Agarwal, A., Thakur, M.: Sign language recognition using Microsoft Kinect. In: IEEE International Conference on Contemporary Computing (2013)
5. Beena, M.V., Namboodiri, M.N.A.: Automatic sign language finger spelling using convolutional neural network: analysis. Int. J. Pure Appl. Math. 177(20), 9–15 (2017)
6. Kang, B., Tripathi, S., Nguyen, T.Q.: Real time sign language finger-spelling recognition using convolutional neural network from depth map. In: 3rd IAPR Asian Conference on Pattern Recognition (2015)
7. Rao, G.A., Syamala, K., Kishore, P.V.V., Sastry, A.S.C.S.: Deep Convolutional Neural Networks for Sign Language Recognition, Department of ECE, KL Deemed to be University, SPACES-2018 (2018)
8. Hosoe, H., Sako, S., Kwolek, B.: Recognition of JSL finger spelling using convolutional neural networks. In: 15th IAPR International Conference on Machine Vision Applications (MVA), Nagoya University, Nagoya, 8–12 May 2017
9.
Huang, J., Zhou, W., Li, H., Li, W.: Sign language recognition using 3D convolutional neural networks, University of Science and Technology of China,</s>
<s>Hefei, China (2015)
10. Mekala, P., Gao, Y., Fan, J., Davari, A.: Real-time sign language recognition based on neural network architecture. IEEE Conference, April 2011
11. Pigou, L., Dieleman, S., Kindermans, P.-J., Schrauwen, B.: Sign language recognition using convolutional neural networks. In: Agapito, L., Bronstein, M.M., Rother, C. (eds.) ECCV 2014. LNCS, vol. 8925, pp. 572–578. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-16178-5_40
12. Tsai, B.-L., Huang, C.-L.: A vision-based Taiwanese sign language recognition system. In: International Conference on Pattern Recognition (2010)
13. Yasir, F., Prasad, P.W.C., Alsadoon, A., Elchouemi, A.: Bangla sign language recognition using convolutional neural network. In: International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT) (2017)</s>
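The categorical cross-entropy of Eq. (3) in Sect. 3.4 above can likewise be written out directly. The NumPy sketch below is our own illustration; the `eps` clipping that guards against log(0) is a common implementation detail, not part of the paper:

```python
import numpy as np

def categorical_cross_entropy(t, p, eps=1e-12):
    # Eq. (3): L_i = -sum_j t_{i,j} * log(p_{i,j})
    # t is a one-hot target vector, p the predicted class probabilities.
    p = np.clip(p, eps, 1.0)
    return -np.sum(t * np.log(p))

t = np.array([0.0, 1.0, 0.0])       # true class is index 1
good = np.array([0.1, 0.8, 0.1])    # confident and correct prediction
bad = np.array([0.7, 0.2, 0.1])     # confident and wrong prediction
print(categorical_cross_entropy(t, good))   # ~0.22
print(categorical_cross_entropy(t, bad))    # ~1.61, penalized much more heavily
```

With a one-hot target only the probability assigned to the true class matters, which is why a confident wrong prediction is penalized so strongly; this is the property that keeps the weight updates from shrinking prematurely during training.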
<s>A Modern Approach for Sign Language Interpretation Using Convolutional Neural Network
Chapter · August 2019. DOI: 10.1007/978-3-030-29894-4_35
Pias Paul, Moh. Anwar-Ul-Azim Bhuiya, Md. Ayat Ullah, Molla Nazmus Saqib, Nabeel Mohammed, and Sifat Momen
Department of Electrical and Computer Engineering, North South University, Plot-15, Block-B, Bashundhara, 1229 Dhaka, Bangladesh
{paul.pias,anwar.bhuiyan,ayat.ullah,nazmus.saqib,nabeel.mohammed,sifat.momen}@northsouth.edu
Abstract. There are nearly 70 million deaf people in the world. A significant portion of them and their families use sign language as a medium for communicating with each other. As automation is gradually introduced to many parts of everyday life, the ability of machines to understand and act on sign language will be critical to creating an inclusive society.
This paper presents multiple convolutional neural network based approaches suitable for fast classification of hand sign characters. We propose two custom convolutional neural network (CNN) based architectures which are able to generalize 24 static American Sign Language (ASL) signs using only convolutional and fully connected layers. We compare these networks with transfer learning based approaches, where multiple pre-trained models were utilized. Our models have remarkably outperformed all the preceding models by accomplishing 86.52% and 85.88% accuracy on RGB images of the ASL Finger Spelling dataset.

Keywords: Image processing · CNN · Transfer learning · ASLR · Finger Spelling dataset

1 Introduction

A language that needs manual communication and involvement of body language to convey meaning, as opposed to conveyed sound patterns, is known as sign language. This can involve a simultaneous combination of handshapes, orientation, and movement of the hands, arms or body, and different facial expressions to fluidly express a speaker's thoughts. In some cases, sign language is the only method that is used to communicate with a person with hearing impairment. Sign languages such as American Sign Language (ASL), British Sign Language (BSL), Quebec Sign Language (LSQ), and Spanish Sign Language (SSL) differ in the way an expression is made. They share many similarities with spoken languages, which is why linguists consider sign languages to be a part of natural languages.

© Springer Nature Switzerland AG 2019. A. C. Nayak and A. Sharma (Eds.): PRICAI 2019, LNAI 11672, pp. 431–444, 2019. https://doi.org/10.1007/978-3-030-29894-4_35

Sign Language Recognition (SLR), which is required to perceive gesture-based communication, has been widely studied for years. It provides a way to help deaf/mute individuals interact easily with technology.
However, just like speech recognition, this is not an easy task. Recent advances in computer vision, particularly the use of convolutional neural networks (CNNs), have created opportunities for effective solutions to problems previously thought to be almost unattainable. In this paper, we present multiple CNN-based models to classify 24 characters from the ASL Finger Spelling dataset. We present models which were custom-made for
<s>this problem, as well as mod-els which leverage transfer learning. One of our custom models achieved a testaccuracy of 86.52%, which is better than the current best published result.2 Related WorksIn 2013 Pugeault and Bowden [18] proposed an interactive keyboard-less graph-ical user interface that can detect hand shapes in real time. In that work, theyused a Microsoft Kinect device for collecting both appearance and depth images,OpenNI+NITE framework for hand detection and tracking, features based onGabor filters and a Random Forest classifier for classification. From the ASLdataset, which was also proposed in that paper, they have ignored and dis-carded images of letter j and z since both of these letters require motion andused leftover 48000 images; 50% for training and 50% for validation. Using bothappearance and depth images together brought them better classification resultcompared to the usage of appearance and depth information separately.Tripathi et al. have proposed a continuous hand gesture recognition system[29]. In their approach, keyframes were extracted using gradient-based methodsand HoG features were used for actual feature extraction. For classification, sev-eral distance metrics were used including City Block, Mahalanobis, Chess Board,Cosine, etc. They created a dataset using 10 sentences signaled by 5 differentpeople. They found that using a higher number of bins for HoG resulted in bet-ter performance and the best performance was found when Euclidean distanceemployed.Masood et al. [15] proposed a method to bridge the gap for the people whodo not know and want to communicate using sign languages through isolatedsign language recognition using methods based on computer vision. They usedan Argentinean dataset (LSA) with 2300 video samples and substantial ges-ture variation with 46 categories. Their model used the Inception-v3 pre-trainedCNN, and combined with the use of Long Short Term Memory (LSTM) forsequence predictions. 
They tried three models: a single layer of 256 LSTM units, a wider recurrent neural network (RNN) with 512 LSTM units, and a deep RNN consisting of 3 layers with 64 LSTM units each. Empirically, they found that the model with 256 LSTM units gave the best performance. Two approaches were taken for training: one was a prediction approach, in which predictions of frames made by the CNN were fed as input to the LSTM. In the other approach, the output of the pooling layers was directly fed into the LSTM. The second approach gave a better result, with an accuracy of 95.2%.

3 Experimental Setup

This section provides details of the setup used for the experiments performed. We initially present the dataset on which we will train and compare the different models. This is followed by a brief description of the data preprocessing and partitioning. The proposed models are discussed next, including descriptions of the custom models as well as the transfer learning techniques.

3.1 Dataset

The work is based on the ASL Finger Spelling dataset, which consists of images obtained from 5 different users. In the proposed dataset [18], images were obtained in 2 different formats: each user was asked to perform 24 ASL static signs, which were captured in both color and depth format. There are
<s>a total of131,670 number of images where 65,774 images have RGB channels and restare depth images that contain the intensity values in the image which representthe distance of the object or simply depth from a viewpoint. The reason behindchoosing American Sign Language (ASL) for this work was that ASL is widelylearned as a second language and the dataset contains sign from only using onehand which reduces the task of over-complicated feature extraction. Here, thedataset comprises 24 static signs which have similar lighting and backgroundexcluding the letters j and z since these 2 letters require dictionary lookup andinvolve motion (Table 1).Table 1. Types of images collected from each userUser Image typeRGB Depth+RGBA 12,547 25,118B 13,898 27,820C 13,393 26,810D 13,154 26,332E 12,782 25,5903.2 Data Preprocessing and Feature ExtractionFrom the total of 5 user samples, 4 were considered in such a way that theproposed dataset [18] was divided into two parts. First part is Dataset-A whichcontains only color images and the other one is Dataset-B which contains both434 P. Paul et al.Table 2. Preparing the datasetImage type Training set Validation set LabelRGB 26,547 26,445 DataSet-ADepth+RGB 53,142 52,938 DataSet-Bdepth and color images. This is shown in Table 2. In both the DataSet-A andDataSet-B, images from users C and D were used as the training set and imagesfrom user A and B were used to make validation/test set. As the images were ofdifferent sizes, all of them were re-sized to 200×200 pixels. Pixel color values werere-scaled between 0 and 1 and then each image was normalized by subtractingthe mean (Fig. 1).Fig. 1. Illustration on the variety of the dataset where each column represents imagesof individual letters that has been collected from 4 different users.To increase the amount of training data, each training image was augmentedusing the transformations mentioned in Table 3. 
The augmentations were applied singly (not compositionally) and were only applied to RGB images. The validation data were not augmented per se, but were modified.

Table 3. Augmentation techniques applied on Dataset-A and Dataset-B

Training data set                       Validation data set
Argument             Parameter          Argument         Parameter
Rescale              1./255             Rescale          1./255
Center-Cropped       True               Center-Cropped   True
Shear Range          0.2 degree
Zoom Range           0.1
Random Rotation      20 degree
Horizontal Flip      True
Height Shift Range   0.1
Width Shift Range    0.1
Fill Mode            Nearest

3.3 The Proposed Architecture

Table 4 shows the details of the two custom models used for comparison. Both models were trained and tested on DataSet-A and DataSet-B. For Custom-Model-A, conv3-32 means receptive field size 3 and 32 channels. The images were resized to 128 × 128 before passing through the convolutional layers. The model uses LeakyReLU as its activation function; we found LeakyReLU to work better than ReLU for this model after experimentation. Apart from max pooling, Global Average Pooling (GAP) was also used to downsample the input dimension from each layer using a 2 × 2 window. After flattening, the output of the last pooling layer was passed through four fully connected layers, with the final layer having 24 neurons for the 24 classes. The last layer also uses the softmax activation function.

In Custom-Model-B, 2 × 2
<s>strided convolution [27] was used to reduce the sizeof the output feature maps instead of the more commonly used pooling tech-niques. Surprisingly, for this model, our tests showed better performance usingthe RelU activation function (an investigation looking into the discrepancy iscurrently under progress and will be reported in a later paper). Batch normal-ization was also used in this model. This model also flattens the output of thelast convolutional layer and forwards the output to four fully connected layers,although the configurations are slightly different from Custom-Model-A.3.4 Transfer Learning Using Pre-trained ModelsApart from our custom models, we have also experimented using Transfer Learn-ing which leverages the weights or filters of a pre-trained model on a new problemas in the case of most real-world problems when there are insufficient data pointsto train complex models. The premise is if knowledge from an already trainedmachine learning model is applied to a different but related problem, it mayfacilitate the learning process as the model is already trained to identify somepotentially useful features.Figure 2 shows the overall strategy used for transfer learning. This methodis one which has been used in many different tasks, where the softmax layer ofthe original pre-trained model is discarded and replaced by a new classificationlayer with random weights. All layers except this new one are frozen and then thenewly crafted model is trained until the random weights change to be compatiblewith the rest of the model. Then the frozen layers are unfrozen and the entiremodel is trained.For this work, we experimented with five different models all pre-trained onthe ImageNet dataset. These are MobileNetV2, NASNetMobile, DenseNet21,VGG16 and VGG19.3.5 Training DetailsWe arrived at a set of hyper parameters which worked well through experimen-tation. Table 5 summarizes this information.436 P. Paul et al.Table 4. 
Configuration of the customized models

Custom-Model-A: input (128 × 128 image); repeated stacks of conv3-32 and conv3-64 layers with LeakyReLU activations, each stack followed by MaxPool (stride = 2); a DepthwiseConv3 layer with LeakyReLU and BatchNormalization mid-network; GlobalAveragePooling after the last convolutional stack; fully connected layers FC-1024, FC-576, FC-256, and FC-128, each with ReLU and Dropout(0.4); and a final FC-24 layer with softmax.

Custom-Model-B: input (200 × 200 image); repeated Sequential blocks of (conv3-64, ReLU, BatchNorm) pairs, each block followed by a conv3-64 (stride = 2) downsampling convolution, with Dropout(0.6) between stages; the flattened output is passed to fully connected layers FC-576, FC-256, FC-128, and FC-64, each with ReLU and Dropout(0.6); and a final FC-24 layer with softmax.

The loss function of choice was categorical cross entropy, as shown in Eq. 1, which measures the classification error as a cross-entropy loss when multiple categories are in use. Here, the double sum is over the observations i, whose number is N, and the categories c, whose number is C; the term 1_{y_i \in C_c} is the indicator function of the ith observation belonging to the cth category. Finally, P_{\text{model}}[y_i \in C_c] is the probability predicted by the model that the ith observation belongs to the cth category.

-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} 1_{y_i \in C_c}\,\log P_{\text{model}}[y_i \in C_c] \quad (1)

Table 5. Training details

Batch size      64
Input size      200 × 200 × 3
Learning rate   0.001
Optimizer       Adam
Loss function   Categorical cross entropy
Epochs          25

Fig. 2. The proposed transfer learning process

For this work the base learning