Speech Recognition

the acoustic data. In essentially all current ASR systems, an intermediate goal for the recognizer is to determine the likelihood of possible speech sound state sequences, where these states typically correspond to a part of a phoneme (although in some small-vocabulary systems the states are components of words, and no phoneme representations are used). Input to the recognition process is a digitized version of the electric signal from the microphone that represents changes in acoustic pressure at some distance from the mouth of the speaker of the message. As discussed later, the signal typically undergoes further information-reducing transformations before being used in the evaluation of the likelihood of each possible sub-phonetic state sequence, leading to the likelihood of each possible word sequence. In general, the outcome of the recognition process is a list of hypothesized word sequences, where words here mean tokens from a pre-defined recognition lexicon. The hypothesis is generated by a decision process which aims to produce the most likely word sequence W given the input signal X. This can be expressed mathematically by the so-called Maximum a Posteriori (MAP) decision rule (using the concept of conditional probability) as

W = argmax_{W_i} P(W_i | X, Θ)   (1.1)

where W_i represents the i-th word sequence from the lexicon, the conditional probability is evaluated over all possible sequences from the lexicon, and Θ represents the set of parameters used to estimate the probability distribution.
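The MAP decision rule (1.1) can be sketched in a few lines. Everything below is illustrative: the candidate sequences and their posterior scores are invented stand-ins for what a real acoustic front-end and statistical models would supply.

```python
# Minimal sketch of the MAP decision rule (1.1): pick the word sequence
# W_i maximizing P(W_i | X, Theta).  The candidates and their posterior
# log probabilities are hypothetical, standing in for real models.

import math

def map_decode(hypotheses):
    """hypotheses: dict mapping word sequence -> log P(W_i | X, Theta)."""
    return max(hypotheses, key=hypotheses.get)

# Hypothetical posterior log probabilities for three candidate sequences
candidates = {
    "recognize speech": math.log(0.6),
    "wreck a nice beach": math.log(0.3),
    "recognize peach": math.log(0.1),
}
print(map_decode(candidates))  # -> recognize speech
```

Working in the log domain is conventional here, since real posteriors are tiny products of many per-frame terms.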
The word sequences W_i are not represented by the acoustic signals themselves but by their models M(W_i), also written M_i for simplicity's sake, thus yielding:

M = argmax_{M_i} P(M_i | X, Θ)   (1.2)

where M is a sequence of symbols (words) representing the linguistic message in the speech, M_i is a model of the word sequence W_i, P(M_i|X, Θ) is the posterior probability of the model given the acoustic input, and the maximum is evaluated over all possible models (i.e., all possible word sequences).
Bayes' rule can then be applied to (1.2), yielding:

P(M_i | X, Θ) = P(X | M_i, Θ) P(M_i | Θ) / P(X | Θ)   (1.3)
where P(X|M_i, Θ) represents the contribution of the acoustic model (i.e., the likelihood that a specific model M_i has produced the acoustic sequence X), and P(M_i|Θ) represents the contribution of the language model (i.e., the probability of the associated word sequence). For the sake of simplicity (and tractability of the parameter estimation process), state-of-the-art ASR systems always assume independence between the acoustic model parameters, denoted Θ_A from now on, and the language model parameters, denoted Θ_L. Based on the above, we thus have to address the following problems:

– Decoding (recognition): Given an unknown utterance X, find the most probable word sequence W (i.e., the most probable model M) such that:

M = argmax_{M_i} [ P(X|M_i, Θ_A) P(M_i|Θ_L) / P(X|Θ_A, Θ_L) ]   (1.4)

Since all parameters Θ_A and Θ_L are frozen during recognition, the probability P(X|Θ_A, Θ_L) = P(X) is the same for all choices of i and can thus be ignored, so (1.4) simplifies to:

M = argmax_{M_i} [ P(X|M_i, Θ_A) P(M_i|Θ_L) ]   (1.5)

– Acoustic modeling: Acoustic modeling refers to the estimation of P(X|M_i, Θ_A), typically using Hidden Markov Models (HMMs), Artificial Neural Networks (ANNs), or hybrid HMM/ANN systems, which will be described in Section 1.4. At training time, a large number of training utterances X_j (j = 1, ..., J) with their associated models M_j are used to estimate the optimal acoustic parameter set Θ*_A, such that:

Θ*_A = argmax_{Θ_A} ∏_{j=1}^{J} P(X_j|M_j, Θ_A)   (1.6)
     = argmax_{Θ_A} Σ_{j=1}^{J} log P(X_j|M_j, Θ_A)   (1.7)
where (1.6) is referred to as the Maximum Likelihood (ML) criterion, and (1.7) as the log-likelihood criterion.(1)

(1) Although both criteria are equivalent, it is usually more convenient to work with the sum of log likelihoods.
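The equivalence of (1.6) and (1.7) follows from the monotonicity of the logarithm: maximizing a product of likelihoods and maximizing the sum of their logs select the same parameters, and the sum is numerically far safer. A small sketch, with made-up per-utterance likelihoods for two hypothetical parameter settings:

```python
# The log is monotonic, so the argmax over a product of likelihoods
# (criterion 1.6) equals the argmax over the sum of log likelihoods
# (criterion 1.7).  All numbers below are invented for illustration.

import math

# Per-utterance likelihoods P(X_j | M_j, Theta_A) under two hypothetical
# acoustic parameter settings
likelihoods = {
    "theta_1": [1e-40, 2e-38, 5e-39],
    "theta_2": [3e-40, 1e-38, 4e-39],
}

best_by_product = max(likelihoods, key=lambda t: math.prod(likelihoods[t]))
best_by_logsum = max(likelihoods, key=lambda t: sum(math.log(p) for p in likelihoods[t]))
assert best_by_product == best_by_logsum  # same argmax, as (1.6)/(1.7) claim
print(best_by_logsum)
```

With realistically long utterances the raw product would underflow to zero in floating point, which is exactly why implementations work with (1.7).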
Ideally, the training database and parameter optimization process should target the minimization of the error rate on an independent test set, which is of course not guaranteed by ML training (see further discussion in Section 1.3).

– Language modeling: The goal of the language model is to estimate the prior probabilities of sentence models P(M_i|Θ_L). At training time, Θ_L is learned separately from the acoustic model, which is sub-optimal (see [23]) but convenient. These language model parameters are commonly estimated from large text corpora or from a given finite state automaton, from which N-grams (i.e., the probability of a word given the (N−1) preceding words) are extracted. Typically, only bi-grams and tri-grams are currently used.

1.3 Pattern classification with realistic data

The distinction between training and testing data is a critical one for pattern classification. In some simple examples of function learning, the learned relationship can be explicitly stated for the complete set of possible feature vectors. In classification problems, however, we often have a huge or even infinite number of potential feature vectors, and the system must be trained using a very limited subset that has been labeled with class identity. Thus, even though the relationship between feature vector and class might be learned perfectly for some training set of data, likelihoods or posteriors may not be well estimated for the general population of possible samples.

In general, classification error on the training set patterns should be viewed as a lower bound. A better estimate of the classifier error is obtained using an independent test set. The larger this test set, the better the representation of the general population of possible test patterns. Conventional significance tests (e.g., a normal approximation to the binomial distribution for correctness on the test set) should be done as a sanity check.
For instance, a 49% error rate on a million-pattern test set is significantly different from chance (50% error) on a two-class problem; the difference represents 10,000 patterns. On the other hand, the same error percentage on a 100-pattern test set is indistinguishable from chance performance.(2)
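The sanity check behind this example can be computed directly. Under the normal approximation to the binomial, chance performance on a two-class problem has a standard deviation of √(npq) patterns; a sketch:

```python
# Significance sanity-check for the two examples above: a 49% error rate
# is a 10,000-pattern deviation from chance on n = 1,000,000 test
# patterns, but only a 1-pattern deviation on n = 100.

import math

def z_score(n, error_rate, p_chance=0.5):
    """How many standard deviations the observed error is from chance."""
    sd = math.sqrt(n * p_chance * (1.0 - p_chance))  # sqrt(npq), in patterns
    deviation = abs(p_chance - error_rate) * n       # patterns away from chance
    return deviation / sd

print(z_score(1_000_000, 0.49))  # 10,000 / 500 = 20 sigma: highly significant
print(z_score(100, 0.49))        # 1 / 5 = 0.2 sigma: indistinguishable from chance
```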
(2) For the normal approximation to a binomial distribution, the equivalent standard deviation is √(npq), where n is the number of patterns, p is the probability of getting the class correct by chance, and q is the probability of getting the class incorrect by chance. For the examples above, this gives 500 and 5 patterns, respectively.

One way to effectively increase the size of a test set is to
use a "jackknife" procedure, in which each split of the data (e.g., fifths) is used in turn for testing after using the remaining part for training. Thus, all of the available data is ultimately used for the test set.

Training set size is also a major concern for real problems. The larger the training set, the better the classifier will do at representing the underlying distributions. Also, the more complex the recognizer (e.g., the larger the number of independent parameters), the greater the risk of over-specializing to the training sample and generalizing poorly to independent test sets. Both of these factors push pattern recognition researchers toward using as much training data as possible. Unfortunately, in most practical situations, there are strong limits on the available data. In this case, clever approaches to squeezing information out of limited data are essential, whether they are based on domain-specific knowledge or on general constraints that diminish the tendency of the classifier to "over-learn" the training set.(3)

Another difficulty associated with the finite size of the training set concerns the discriminant character of a statistical classifier. As noted in the previous section, likelihoods and posteriors are simply transformable into one another (by Bayes' rule). However, in the case of probability estimates (as opposed to true probabilities), the training criterion will affect the results. For instance, if probabilities are estimated in a procedure that attempts to maximize the discrimination between classes, the classification error will be minimized. This might actually produce poorer estimates of the likelihoods than would be observed after training by an ML criterion that attempts to best model the underlying densities.
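The "jackknife" rotation described earlier in this section can be sketched in a few lines; the five-way split and toy data are illustrative:

```python
# Sketch of the jackknife (rotation) procedure: each fifth of the data
# serves once as the test split while the rest is used for training, so
# every pattern is eventually used for testing.

def jackknife_splits(data, k=5):
    """Yield (train, test) pairs; each chunk is the test set exactly once."""
    chunks = [data[i::k] for i in range(k)]  # k roughly equal parts
    for i in range(k):
        test = chunks[i]
        train = [x for j, c in enumerate(chunks) if j != i for x in c]
        yield train, test

data = list(range(10))
tested = []
for train, test in jackknife_splits(data, k=5):
    assert sorted(train + test) == data  # no pattern lost in any split
    tested.extend(test)
print(sorted(tested))  # every pattern appears in a test split exactly once
```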
This would not be the case for an infinite amount of training data, for which the estimates could converge to the true underlying distributions, so that Bayes' rule would literally be satisfied and the two criteria would be equivalent.

In general, making strong assumptions about the data improves the quality of the estimate, if the assumptions are correct. To the extent that these assumptions are wrong, the resulting estimates are poor.

(3) In [3, 18], we describe one such general form of training constraint, called early stopping or cross-validation. In this approach, training parameters are modified based on performance on an independent data set, so that generalization performance is optimized.

Approaches such as EM estimation of Gaussian mixtures and gradient-based neural network training incorporate successively weaker constraints,
and thus have largely supplanted the earlier, simpler models (e.g., one Gaussian per class).

Speech recognition requires the incorporation of "soft" classification decisions as part of the more global decision about the utterance. This requires a dynamic model that represents sequences, rather than the static pattern classification described in this section. A general statistical framework for this more general case is briefly presented next.

1.4 Finite State Automata

Over the last 20 years, Finite State Automata (FSA), and more particularly Stochastic Finite State Automata (SFSA) and different variants of Hidden Markov Models (HMMs), have been used quite successfully to address several complex sequential pattern recognition problems, such as continuous speech recognition, cursive (handwritten) text recognition, time series prediction, biological sequence analysis, and many others.

FSA allow complex learning problems to be solved by assuming that the sequential pattern can be decomposed into piecewise stationary segments, encoded through the topology of the FSA. Each stationary segment can be parameterized in terms of a deterministic or stochastic function. In the latter case, it may also be that the SFSA state sequence is not observed directly but is a probabilistic function of an underlying finite-state Markov chain. This yields the definition of the powerful Hidden Markov Models, involving two concurrent stochastic processes: the sequence of HMM states modelling the sequential structure of the data, and a set of state output processes modelling the (local) stationary character of the data. The HMM is called "hidden" because there is an underlying stochastic process (i.e., the sequence of states) that is not observable, but affects the observed sequence of events.
Furthermore, depending on the way the SFSA is parameterized and trained, SFSA (and HMMs in particular) can be used as a production model (where the observation sequence is considered as an output signal produced by the model) or as a recognition model (acceptor) (where the
observation sequence is considered as being accepted by the model). Finally, it may also be the case that the HMM is used to explicitly model the stochastic relationship between two (input and output) event sequences, yielding a model usually referred to as an input/output HMM.
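The "production model" view can be made concrete by letting a toy HMM generate an observation sequence. The two-state topology, the discrete emission alphabet {a, b}, and all probabilities below are invented purely for illustration:

```python
# Sketch of an HMM used as a production model: the automaton walks its
# states stochastically and emits (produces) one symbol per visited
# state (Moore-style).  Topology and probabilities are invented.

import random

random.seed(0)

transitions = {  # P(s_n | s_{n-1}), including a final state "F"
    "I":  {"q1": 1.0},
    "q1": {"q1": 0.6, "q2": 0.4},
    "q2": {"q2": 0.5, "F": 0.5},
}
emissions = {  # P(x_n | s_n), emission on states
    "q1": {"a": 0.9, "b": 0.1},
    "q2": {"a": 0.2, "b": 0.8},
}

def sample(dist):
    return random.choices(list(dist), weights=list(dist.values()))[0]

def generate():
    state, observed = "I", []
    while True:
        state = sample(transitions[state])
        if state == "F":
            return observed
        observed.append(sample(emissions[state]))

print(generate())  # a run of mostly "a" symbols followed by mostly "b"
```

Used as an acceptor instead, the same parameters would score a given observation sequence rather than produce one.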
The parameters of these models can be trained by different variants of the powerful Expectation-Maximization (EM) algorithm [1, 15], which, depending on the criterion being used, is referred to as Maximum Likelihood (ML) or Maximum A Posteriori (MAP) training. However, although they belong to the same family, all these models exhibit different properties. In the next section, we briefly discuss and compare some of the variants of these powerful SFSA and HMM models currently used for sequence processing.

1.4.1 Deterministic finite state automata

In its most general form [10], and as summarized in Table 1.1, a Finite State Automaton (FSA), denoted M in this paper, is defined as an abstract machine consisting of:

– A set of states Q = {q_I, q_1, ..., q_k, ..., q_K, q_F}, including the initial state q_I and the final state q_F, also referred to as the accepting state (in the case of recognizers). Variants of this include machines having multiple initial states and multiple accepting states. Here, the specific state visited at time n will be denoted s_n ∈ Q, and a state sequence of length N will be denoted S = {s_1, s_2, ..., s_n, ..., s_N}.
– A set Y of (discrete or continuous) input symbols or vectors. A particular sequence of N input symbols/vectors will be denoted Y = {y_1, y_2, ..., y_n, ..., y_N} = Y_1^N, where y_n represents the input symbol/vector at time n, and Y_m^n = {y_m, y_{m+1}, ..., y_n}.
– A set Z of (continuous or discrete) output symbols or vectors.
A particular sequence of N output symbols/vectors will be denoted Z = Z_1^N = {z_1, z_2, ..., z_n, ..., z_N}, where z_n represents the output symbol/vector at time n.
– A state transition function s_n = f(y_n, s_{n−1}), which takes the current input event y_n and the previous state s_{n−1} and returns the next state s_n.
– An emission function z_n = g(s_n, s_{n−1}), which takes the current state s_n and the previous state s_{n−1} and returns an output event z_n.

This automaton is usually known as a Mealy FSA, i.e., one producing an output for each transition. As a variant of this, the emission function of a Moore FSA depends only on the current state, i.e., z_n = g(s_n), thus producing an output for each visited state. It can however be shown that there is a homomorphic equivalence between Mealy and Moore automata, provided the states are augmented and renamed. Finally, in the case of sequential pattern processing, the processed sequence is often represented as an observed sequence of symbols or vectors
which, depending on the type of automaton and the optimization criterion, will sometimes be considered as input and sometimes as output events. To accommodate this flexibility in the sequel, we thus also define the observed sequence of size N as X = X_1^N = {x_1, x_2, ..., x_n, ..., x_N}, where x_n is the observed event/vector at time n. For example, in the case of speech recognition, x_n will be the acoustic vector resulting from the spectral analysis of the signal at time n, and is equivalent to z_n (since in that case the observations are the outputs of the FSA).(4)

A deterministic FSA is one where the transition and emission functions f(·) and g(·) are deterministic, implying that the output event and next state are uniquely determined by a single input event (i.e., there is exactly one transition for each given input event and state).(5)

It is not the goal of the present book to further discuss deterministic FSA, which have been largely used in language theory [10], where FSA are often used to accept or reject a language, i.e., certain sequences of input events. Many training approaches have been developed, mainly aiming at automatically inferring the FSA topology from a set of observation sequences. However, these often depend on the assumed properties (grammar) of the sequences to be processed, and finding the minimum FSA (minimizing the number of states and transitions) is often an open issue.

1.4.2 Stochastic finite state automata and Markov models

A Stochastic FSA (SFSA) is a Finite State Automaton where the transition and/or emission functions are probabilistic. In the case of Markov models, there is a one-to-one relationship between observations and states, and the transition function is probabilistic. In the case of Hidden Markov Models, the emission function is also probabilistic, and the states are no longer directly observable through the input events.
(4) Remember here that an HMM is a generative model.
(5) A non-deterministic FSA is one where the next state depends not only on the current input event, but also on a number of subsequent input events. However, it is often possible to transform a non-deterministic FSA into a deterministic one, at the cost of a significant increase in the possible number of input symbols.

Instead, each state produces one of the possible output events with a certain probability. Depending on their structure (as discussed below), the transition and emission (probability density) functions are thus represented in terms of a set of parameters Θ, which have to be estimated on representative
Tab. 1.1 Classification and relationships between the different finite state automata that can be used to process sequences:

– States: Deterministic FSA: q_k ∈ Q; Markov model: x_n = q_k ∈ Q; HMM: q_k ∈ Q; HMM/ANN: q_k ∈ Q
– Input symbols: Deterministic FSA: y_n ∈ Y; Markov model: none; HMM: none; HMM/ANN: x_n = y_n ∈ Y
– Output symbols: Deterministic FSA: z_n ∈ Z; Markov model: none; HMM: x_n = z_n ∈ Z; HMM/ANN: none
– Transition law: Deterministic FSA: s_n = f(y_n, s_{n−1}); Markov model: transition probabilities P(x_n|x_{n−1}); HMM: transition probabilities P(s_n|s_{n−1}); HMM/ANN: conditional transition probabilities P(s_n|x_n, s_{n−1})
– Emission law (Mealy, emission on transition): Deterministic FSA: z_n = g(s_n, s_{n−1}); HMM: P(x_n|s_{n−1}, s_n)
– Emission law (Moore, emission on state): Deterministic FSA: z_n = g(s_n); HMM: P(x_n|s_n)
– Training methodology: Deterministic FSA: many, often based on heuristics; Markov model: relative counts (incl. smoothing); HMM: EM or Viterbi; HMM/ANN: REMAP (GEM)
– Training criterion: Deterministic FSA: deterministic; Markov model: Maximum Likelihood; HMM: Maximum Likelihood; HMM/ANN: Maximum A Posteriori
training data. If X represents the whole sequence of training data and M its associated SFSA, the estimation of the optimal parameter set Θ* is usually achieved by optimizing a Maximum Likelihood (ML) criterion:

Θ* = argmax_Θ P(X|M, Θ)   (1.8)

or a Maximum A Posteriori (MAP) criterion, which could be either

Θ* = argmax_Θ P(M|X, Θ) = argmax_Θ P(X|M, Θ) P(M|Θ)   (1.9)

or

Θ* = argmax_Θ P(M, Θ|X) = argmax_Θ P(X|M, Θ) P(M, Θ)   (1.10)

In the first case, we take into account the prior distribution of the model M, while in the second case we take into account the prior distribution of the model M as well as that of the parameters Θ.

The simplest form of SFSA is a Markov model, where states are directly associated with the observations. We are interested in modelling

P(X) = P(F|X_1^N) P(x_1|s_I) ∏_{n=2}^{N} P(x_n|X_1^{n−1}, s_I)

which can be simplified, using the k-th order Markov assumption, to

P(X) = P(F|X_{N−k+1}^N) P(x_1|s_I) ∏_{n=2}^{N} P(x_n|X_{n−k}^{n−1})

which leads to the simplest case, the first-order Markov model:

P(X) = P(F|x_N) P(x_1|I) ∏_{n=2}^{N} P(x_n|x_{n−1})

where P(x_1|I) is the initial state probability and the other factors can be seen as transition probabilities. Note that any k-th order Markov model can be expressed as a first-order Markov model, at the cost of possibly exponentially more states. Note also that the transition probabilities are time invariant, i.e., P(x_n=q_l|x_{n−1}=q_k) is independent of the time n, and will sometimes simply be written P(q_l|q_k).
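The first-order factorization above is just a chain of table lookups. A minimal sketch on a toy two-state chain (all states and probabilities invented; each row's missing mass is the transition to the final state F):

```python
# Sketch of the first-order Markov factorization:
# P(X) = P(x_1|I) * prod_n P(x_n|x_{n-1}) * P(F|x_N), on a toy chain.

initial = {"q1": 1.0, "q2": 0.0}   # P(x_1 | I)
final = {"q1": 0.1, "q2": 0.5}     # P(F | x_N)
trans = {                          # P(x_n | x_{n-1}); rows plus P(F|.) sum to 1
    "q1": {"q1": 0.6, "q2": 0.3},
    "q2": {"q1": 0.1, "q2": 0.4},
}

def sequence_probability(states):
    p = initial[states[0]]
    for prev, cur in zip(states, states[1:]):
        p *= trans[prev][cur]
    return p * final[states[-1]]

# 1.0 * 0.6 * 0.3 * 0.5 = 0.09
print(sequence_probability(["q1", "q1", "q2"]))
```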
The set of parameters, represented by the (K × K) transition probability matrix, i.e.,

Θ = {P(x_n=q_l|x_{n−1}=q_k), ∀ q_l, q_k ∈ Q}

is then directly estimated on a large number of possible observation (and thus state) sequences, such that

Θ* = argmax_Θ P(X|M, Θ)

which simply amounts to estimating the relative counts of observed transitions, possibly smoothed in the case of undersampled training data, i.e.,

P(x_n=q_l|x_{n−1}=q_k) = n_{kl} / n_k

where n_{kl} stands for the number of times a transition from state q_k to state q_l occurred, while n_k represents the number of times state q_k was visited.

It is sometimes desirable to compute the probability of going from the initial state q_I to the final state q_F in exactly N steps, which could naively be estimated by summing path likelihoods over all possible paths of length N in model M, i.e.,

P(F|I) = Σ_paths P(x_1|I) P(F|x_N) ∏_{n=2}^{N} P(x_n|x_{n−1})

although there is a possibly exponential number of paths to explore. Fortunately, a more tractable solution exists, using the intermediate variable

α_n(l) = P(x_n=q_l, X_1^{n−1}, I)

which can be computed using the forward recurrence:

α_n(l) = Σ_k α_{n−1}(k) P(x_n=q_l|x_{n−1}=q_k)   (1.11)

and can be used as follows:

P(F|I) = α_{N+1}(F)

Replacing the sum operator in (1.11) by the max operator is equivalent to finding the most probable path of length N between I and F,
resulting in the standard Viterbi algorithm [26], which can be shown to be a variant of dynamic programming.

Although quite simple, Markov models have numerous applications. For example, as already introduced in Section 1.2, they are used in all state-of-the-art continuous speech recognition systems to represent statistical grammars [11], usually referred to as N-grams, estimating the probability of a sequence of L words as

P(W_1^L) ≈ ∏_{l=N+1}^{L} P(w_l|w_{l−N}^{l−1})

which is equivalent to assuming that the possible word sequences can be modelled by a Markov model of order N.

1.4.3 Hidden Markov models (HMM)

In many sequential pattern processing/classification problems (such as speech recognition and cursive handwriting recognition), one of the greatest difficulties is to simultaneously model the inherent statistical variations in sequential rates and feature characteristics. In this respect, Hidden Markov Models (HMMs) have been one of the most successful approaches used so far. As presented in Table 1.1, an HMM is a particular form of SFSA in which Markov models (modelling the sequential properties of the data) are complemented by a second stochastic process modelling the local properties of the data. The HMM is called "hidden" because there is an underlying stochastic process (i.e., the sequence of states) that is not observable, but affects the observed sequence of events.

Although sequential signals such as speech and handwriting are non-stationary processes, HMMs thus assume that the sequence of observation vectors is a piecewise stationary process. That is, a sequence X = X_1^N is modelled as a sequence of discrete stationary states Q = {q_1, ..., q_k, ..., q_K}, with instantaneous transitions between these states. In this case, an HMM is defined as a stochastic finite state automaton with a particular topology (generally strictly left-to-right for speech data). An example of a simple HMM is given in
Figure 1.2. In speech recognition, this could be the model of a word or phoneme, which is assumed to be composed of three stationary parts. In cursive handwriting recognition, this could be the model of a letter.

Once the topology of the HMM has been defined (usually "arbitrarily" or based on a priori knowledge), the main criterion used for training
[Fig. 1.2: A schematic of a three-state, left-to-right hidden Markov model (HMM), with self-loop and forward transition probabilities P(s_n = q_j|s_{n−1} = q_i) and state emission densities p(x_n|s_n = q_i).]

and decoding is based on the likelihood P(X|M, Θ), i.e., the probability that the observed vector sequence X was produced by Markov model M. In this case, the HMM is thus considered as a production model, and the observation vectors x_n are considered as output variables z_n of the HMM.

It can be shown that, given several assumptions [3], the likelihood P(X|M, Θ) can be expressed and computed in terms of transition probabilities P(s_n=q_l|s_{n−1}=q_k, Θ) and emission probabilities, which can be of the Mealy type (emission on transitions), P(x_n|s_n, s_{n−1}, Θ), or of the Moore type (emission on states), P(x_n|s_n, Θ). In the case of multivariate continuous observations, these emission probabilities are estimated by assuming that they follow a particular functional distribution, usually (mixtures of) multivariate Gaussian densities. In this case, the set of parameters Θ comprises all the Gaussian means and variances, mixing coefficients, and transition probabilities. These parameters are usually trained according to the maximum likelihood criterion (1.8), resulting in the efficient Expectation-Maximization (EM) algorithm [8, 15].
In the rest of this chapter, to make the notation more compact, we will often write the event {s_n = q_k} (i.e., state q_k visited at time n) as q_k^n.

Full likelihood

Given this formalism, the likelihood P(X|M) of an observation sequence X given the model M can be calculated by extending the forward recurrence (1.11) defined for Markov models to also include the emission
probabilities. As before, the "full likelihood" P(X|M) could be estimated by summing over all possible paths Q of length N in M, i.e.,

P(X|M) = Σ_{Q∈M} P(X, Q|M)   (1.12)

which would require roughly 2N × L^N arithmetic operations.(6) However, since the events {s_n=q_l}, i.e., q_l^n, are mutually exclusive, P(X|M) can also be written as

P(X|M) = Σ_{l=1}^{L} P(q_l^n, X|M), ∀n ∈ [1, N]   (1.13)

where each term of the sum expresses the probability that X has been generated by M while visiting state q_l at time n, and can be further decomposed (assuming conditional independence of the observations x_n) as

P(q_l^n, X|M) = P(q_l^n, X_1^n|M) P(X_{n+1}^N|q_l^n, X_1^n, M)   (1.14)

where X_m^n represents the observation sub-sequence {x_m, ..., x_n}. The "full likelihood" P(X|M) can thus be expressed as the product of two probabilities:

– Forward probabilities:

α_n(l|M) = P(q_l^n, X_1^n|M)

representing the probability that model M has already generated the sub-sequence X_1^n while being in state q_l at time n. Without any assumptions, this probability can be rewritten as

P(q_l^n, X_1^n|M) = Σ_{k=1}^{L} P(q_k^{n−1}, q_l^n, X_1^{n−1}, x_n|M)
                  = Σ_{k=1}^{L} P(q_k^{n−1}, X_1^{n−1}|M) P(q_l^n, x_n|q_k^{n−1}, X_1^{n−1}, M)

and can thus be estimated through the forward recurrence:
α_n(l|M) = Σ_k α_{n−1}(k|M) P(q_l^n, x_n|q_k^{n−1}, X_1^{n−1}, M)   (1.15)

(6) Since there are L^N possible state sequences and each state sequence requires approximately 2N calculations.
where the sum over k extends over the set of all possible predecessor states q_k of q_l. Initialization is given by

α_1(l|M) = P_{Il}(M)

where Π = {P_{Il}(M), ∀ l = 1, ..., L} represents the initial state distribution for model M. Assuming a Moore automaton (emission on states) and a first-order Markov model, the forward recurrence becomes

α_n(l|M) = P(x_n|q_l^n) Σ_k α_{n−1}(k) P(q_l^n|q_k^{n−1})   (1.16)

which is applied over all possible n and l until we reach the final state q_F at n = N + 1, resulting in

P(X|M, Θ) = α_{N+1}(F|M)   (1.17)

– Backward probabilities:

β_n(l|M) = P(X_{n+1}^N|q_l^n, X_1^n, M)   (1.18)

representing the probability that model M will generate the rest of the sequence X_{n+1}^N, starting from the current state s_n = q_l at time n. Under the same hypotheses, this probability can be estimated by using the backward recurrence:

β_n(l|M) = Σ_k β_{n+1}(k|M) P(q_k^{n+1}|q_l^n, M) P(x_{n+1}|q_k)   (1.19)

where the sum over k extends over all the possible successor states of q_l. Note that this equation has the same form as the forward recursion, but proceeds backward in time. Initialization of this recurrence is given by

β_N(l|M) = P_{lF}(M)

where {P_{lF}(M)} represents the probability of going to the final state q_F from q_l. Given (1.13) and the definition of α, we thus have:
P(X|M) = Σ_{l=1}^{L} P(q_l^N, X_1^N|M) = Σ_{l=1}^{L} α_N(l|M)   (1.20)
where the sum usually applies to the states defined as possible final states of M. The full likelihood P(X|M) can thus also be estimated as

P(X|M) = Σ_{l=1}^{L} α_n(l|M) β_n(l|M), ∀n ∈ [1, N]
       = Σ_{F} α_{N+1}(F|M) = Σ_{I} β_0(I|M)

where {I} and {F} respectively represent the sets of possible initial and final states of M.

Viterbi approximation

Replacing the sum operator in (1.16) by the max operator is equivalent to finding the most probable path of length N generating the sequence X, resulting in the Viterbi recursion:

P(X_1^n, q_l^n) = P(x_n|q_l^n) max_{q_k} { P(X_1^{n−1}, q_k^{n−1}) P(q_l^n|q_k^{n−1}) }   (1.21)

where X_1^n = {x_1, x_2, ..., x_n} and P(X_1^n, q_l^n) represents the probability of having produced the partial observation sequence X_1^n while being in state q_l at time n and having followed the most probable path. The max over {q_k} is applied to the set of possible predecessor states of q_l (given by the topology of the HMM). The likelihood P(X|M, Θ) of the most probable path in M is obtained at the end of the sequence and is equal to P(X_1^N, F).

Taking the negative logarithm of expression (1.21) yields

−log P(X_1^n, q_l^n) = −log P(x_n|q_l^n) + min_{q_k} { −log P(X_1^{n−1}, q_k^{n−1}) − log P(q_l^n|q_k^{n−1}) }   (1.22)

which avoids underflow problems and restates the problem of finding the maximum likelihood path in terms of a standard dynamic programming recurrence, finding the optimal path in model M minimizing the "accumulated distance"
by using the following recursion:

D(n, l) = d(n, l) + min_k [ D(n−1, k) + d_{k,l} ]   (1.23)

where D(n, l) = −log P(X_1^n, q_l^n), d(n, l) = −log P(x_n|q_l^n), and d_{k,l} = −log P(q_l^n|q_k^{n−1}). Running this recursion until n = N and l reaches the final state F thus yields −log P(X|M) = D(N, F).
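A minimal sketch of this min-cost recursion, under the same illustrative array conventions as the forward pass (and again treating every state as a possible final state rather than a single designated q_F):

```python
import numpy as np

def viterbi(pi, A, B):
    """Most probable state path via the min-cost recursion of eq. (1.23).

    Works with costs d = -log p, so products become sums and underflow
    is avoided.  pi : (L,), A : (L, L) with A[k, l] = P(q_l | q_k),
    B : (N, L) emission likelihoods.  Returns (path, total cost).
    """
    N, L = B.shape
    logA = -np.log(A)                    # d_{k,l} = -log P(q_l | q_k)
    D = np.zeros((N, L))                 # D(n, l): accumulated distance
    back = np.zeros((N, L), dtype=int)   # best predecessor, for backtracking
    D[0] = -np.log(pi) - np.log(B[0])
    for n in range(1, N):
        cand = D[n - 1][:, None] + logA          # D(n-1, k) + d_{k,l}
        back[n] = cand.argmin(axis=0)
        D[n] = -np.log(B[n]) + cand.min(axis=0)  # eq. (1.23)
    path = [int(D[-1].argmin())]                 # best final state
    for n in range(N - 1, 0, -1):
        path.append(int(back[n, path[-1]]))
    return path[::-1], float(D[-1].min())
```

Because only `argmin` changes relative to the forward pass, forward and Viterbi decoding share the same O(N L^2) structure.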
During training, the HMM parameters Θ are optimized to maximize the likelihood of a set of training utterances given their associated (and, during training, known) HMM models, according to (1.8), where P(X|M, Θ) is computed by taking all possible paths into account (forward recurrence) or only the most probable path (Viterbi recurrence). Powerful iterative training procedures, based on the Expectation-Maximization (EM) algorithm, exist for both criteria and have been proved to converge to a local optimum. At each iteration of the EM algorithm, the "Expectation" step estimates the most probable segmentation or the best state posterior distribution (referred to as the "hidden variables") based on the current values of the parameters, while the "Maximization" step re-estimates the optimal values of these parameters assuming that the current estimate of the hidden variables is correct. This is further discussed in Section 1.5.2. For further reading on the HMM training and decoding algorithms, see [3, 7, 8, 11].

HMM advantages and drawbacks

The most successful application of HMMs is speech recognition. Given a sequence of acoustic vectors, the goal is to produce the associated phoneme or word transcription. To solve such a problem, one usually associates one HMM with each distinct phoneme (or word). During training, a new HMM is created for each training sentence as the concatenation of the corresponding target phoneme models, and its parameters are re-estimated to maximize the likelihood of that sentence. Over the last few years, a number of laboratories have demonstrated large-vocabulary (at least 1,000 words), speaker-independent,
continuous speech recognition systems based on HMMs.

HMMs are models that can deal efficiently with the temporal aspect of speech (time warping) as well as with frequency distortion. They also benefit from powerful and efficient training and decoding algorithms. For training, only the transcription in terms of the trained speech units is necessary; no explicit segmentation of the training material is required. Also, HMMs can easily be extended to include phonological and syntactic rules (at least when these use the same statistical formalism).
However, the assumptions that make these models efficient and their optimization possible also limit their generality. As a consequence, they suffer from several drawbacks, including, among others:

– Poor discrimination, due to a training algorithm that maximizes likelihoods instead of a posteriori probabilities P(M|X) (i.e., the HMM associated with each speech unit is trained independently of the other models).

– A priori choice of model topology and statistical distributions, e.g., assuming that the probability density functions associated with each HMM state can be described as (mixtures of) multivariate Gaussian densities, each with a diagonal-only covariance matrix (i.e., the possible correlation between the components of the acoustic vectors is disregarded).

– Assumption that the state sequences are first-order Markov chains.

– Assumption that the input observations are not correlated over time. Thus, apart from what the HMM topology captures, the possible temporal correlation across features associated with the same HMM state is simply disregarded.

In order to overcome some of these problems, many researchers have concentrated on integrating Artificial Neural Networks into the formalism of HMMs. In the next section, we present some of the most promising approaches.

1.4.4 Hybrid HMM/ANN systems

The idea of combining HMMs and Artificial Neural Networks (ANN), and more specifically MultiLayer Perceptrons (MLP) [3, 20, 19], was motivated by the observation that HMMs and ANNs have complementary properties: (1) HMMs are clearly dynamic and very well suited to sequential data, but several assumptions limit their generality; (2) ANNs can approximate any kind of nonlinear discriminant function, are very flexible, and do not need strong assumptions about the distribution of the input data, but they cannot properly handle time sequences (7). Therefore, a number of hybrid models have been proposed in the literature; see, e.g., [3].
HMMs are based on a strict probabilistic formalism, making them difficult to interface with other modules in a heterogeneous system.
However, it has indeed been shown [3, 17] that if each output unit of an

(7) Although recurrent neural networks can indeed handle time, they are known to have difficulty learning long-term dependencies, and they cannot easily incorporate knowledge in their structure, as is the case for HMMs.
ANN (typically a multilayer perceptron) is associated with a state q_k of the set of states Q = {q_1, q_2, …, q_K} on which the SFSA are defined, it is possible to train the ANN (e.g., according to the usual least mean squares or relative entropy criteria) to generate good estimates of the a posteriori probabilities of the output classes conditioned on the input. In other words, if g_k(x_n|Θ) represents the output observed on the k-th ANN output unit when the ANN is presented with the input observation vector x_n, we have:

g_k(x_n|Θ*) ≈ P(s_n = q_k | x_n)   (1.24)

where Θ* represents the optimal set of ANN parameters.

When these posterior probabilities (instead of local likelihoods) are used in an SFSA, the model becomes a recognition model (sometimes referred to as a stochastic finite state acceptor), where the observation sequence is an input to the system and where all local and global measures are based on a posteriori probabilities. It was thus necessary to revisit the SFSA basis to accommodate this formalism. In [3, 5], it is shown that P(M|X, Θ) can be expressed in terms of conditional transition probabilities P(s_n|x_n, s_{n−1}) and that it is possible to train the optimal ANN parameter set Θ according to the MAP criterion (1.9). The resulting training algorithm [6, 14], referred to as REMAP (Recursive Estimation and Maximization of A Posteriori Probabilities), is a particular form of EM training directly involving posteriors, where the "Maximization" step involves the (gradient-based) training of the ANN, and where the desired target distribution (required to train the ANN) has been estimated in the previous "Expectation" step. Since this EM version includes an iterative "Maximization" step, it is also sometimes referred to as Generalized EM (GEM). As for standard HMMs, there is a
full likelihood version (taking all possible paths into account) as well as a Viterbi version of the training procedure.

Another popular solution when using a hybrid HMM/ANN system as a sequence recognizer is to turn the local posterior probabilities P(s_n = q_k | x_n) into scaled likelihoods by dividing them by the estimated values of the class priors as observed on the training data, i.e.:

P(s_n = q_k | x_n) / P(s_n = q_k) = P(x_n | s_n = q_k) / P(x_n)   (1.25)

These scaled likelihoods are trained discriminatively (using the discriminant properties of ANNs); during decoding, though, the denominator of the resulting scaled likelihood P(x_n | s_n = q_k) / P(x_n) is independent of the class and simply appears as a normalization constant.
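The conversion of eq. (1.25) is essentially a one-liner in practice. The sketch below assumes dense per-frame posterior arrays and priors obtained by counting state occupancies on the training data; the function and argument names are illustrative:

```python
import numpy as np

def scaled_log_likelihoods(posteriors, priors, floor=1e-8):
    """Turn ANN state posteriors P(q_k | x_n) into log scaled likelihoods
    log [P(x_n | q_k) / P(x_n)] by dividing out the class priors P(q_k),
    as in eq. (1.25).  Done in the log domain for numerical safety.

    posteriors : (N, K) network outputs, one row per frame
    priors     : (K,)   state priors estimated on the training data
    """
    post = np.maximum(posteriors, floor)   # avoid log(0) on hard zeros
    return np.log(post) - np.log(priors)
```

The returned log scaled likelihoods can then replace the −log emission terms in a standard Viterbi or forward recurrence.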
The scaled likelihoods can thus simply be used in a regular Viterbi or forward recurrence to yield an estimator of the global scaled likelihood [9]:

P(X|M, Θ) / P(X) = Σ_{paths} Π_{n=1}^{N} [ P(x_n|s_n) / P(x_n) ] P(s_n|s_{n−1})   (1.26)

where the sum extends over all possible paths of length N in model M.

These hybrid HMM/ANN approaches provide more discriminant estimates of the emission probabilities needed for HMMs, without requiring strong hypotheses about the statistical distribution of the data. Since this result still holds with modified ANN architectures, the approach has been extended in a number of ways, including:

– Extending the input field to accommodate not only the current input vector but also its right and left contexts, leading to HMM systems that take into account the correlation between acoustic vectors [3].

– Partially recurrent ANNs [25], feeding back previous activation vectors on the hidden or output units, leading to some kind of higher-order HMM.

1.5 Acoustic Modeling

1.5.1 Acoustic model

Starting from (1.5), the goal of the acoustic model is to find the model M_k (representing a specific word sequence) that has the highest likelihood P(X|M_k, Θ_A) of having produced the observed acoustic vector sequence X, i.e.:

M_k = argmax_{M_i} log P(X|M_i, Θ*_A)   (1.27)

where Θ*_A represents the optimal parameter set of the acoustic models as obtained after training (as discussed in Section 1.5.2).

In order to better approximate the dynamics of speech production, HMMs for ASR are usually composed of multiple states (e.g., 2-5) for each phonemic sound unit. Furthermore, the phoneme classes are typically further subdivided into different contextual classes (e.g., with particular neighbors to the left and right), typically resulting in one HMM model (as illustrated in Figure 1.2)
per phonetic unit. This is done in
order to better represent the wide variability of the sounds that can be associated with any particular phoneme, a primary cause of which is the coarticulation between neighboring speech sounds due to the inertia of the vocal apparatus. Since using all allowable contexts would lead to a large number of context-dependent phoneme classes, the context phonemes are often clustered into larger classes, and in this way the number of sub-word units is kept reasonably small. In large systems, acoustic probabilities from differing degrees of context and clustering are integrated together, much as was described earlier for the language models.

[Fig. 1.3: Lexicon (pronunciation) models for the words "of" and "the", in terms of subword units (phonemes). Each state in these models could consist of several HMM states, including possible self-loops.]

As illustrated in Figure 1.3, every lexicon word is then represented by a Markov model where each node is itself an HMM associated with the corresponding phonetic unit model (8), e.g., a 3-state HMM as represented in Figure 1.2. Each lexicon word is thus modeled by the resulting HMM.

1.5.2 Acoustic model (HMM) training

As already discussed in Section 1.4, the distributions of feature vectors associated with each state of the resulting acoustic HMM model are

(8) The phoneme is in practice an idealized unit that does not always correspond to real acoustic events observed in the data. Efforts continue to derive alternative units directly from the data.
usually modeled by using multi-Gaussian distributions or ANNs, and all the parameters (emission and transition probabilities) of the resulting HMM models are estimated by particular instances of the Expectation-Maximization algorithm.

We assume access to a large amount of acoustic training data, including numerous utterances (pronounced by multiple speakers, if we target a speaker-independent system), labelled simply in terms of their word sequences. Assuming also that we have access to a lexicon representing each word as a Markov model of subword units (phonemes), each training utterance X_j, with j = 1, …, J (J being the total number of training utterances), can be associated with a specific HMM model M_j. As done with EM, the goal of HMM training is then to find the optimal set of parameters Θ*_A maximizing the log likelihood of the data given the models, i.e.:

Θ*_A = argmax_{Θ_A} Σ_{j=1}^{J} log P(X_j | M_j, Θ_A)   (1.28)

which can be solved (iteratively) by using particular instances of the Expectation-Maximization algorithm, such as the Forward-Backward or the Viterbi algorithms (see, e.g., [4, 15] for details). In the case of Viterbi training, and as illustrated in Figure 1.4, the training algorithm can be summarized as follows:

1) Start from an initial set of parameters Θ^(0)_A (e.g., estimating the parameters from a linear segmentation).

2) For all training utterances X_j and their associated models M_j, find the maximum likelihood (best) paths maximizing P(X_j|M_j) (Viterbi approximation of the full likelihood) using the Viterbi recursion (1.21), thus yielding:

Q* = argmax_{Q_j ∈ M_j} P(X_j, Q_j | M_j, Θ^(t)_A)

i.e., a new segmentation of the training data. This step is often referred to as "forced alignment", since we are forcing the matching of the utterances X_j onto the given models M_j.
3) Given this new segmentation at iteration t, collect all the vectors (over all utterances X_j) associated with each state q_k and use standard statistical optimization techniques (or ANN training techniques) to compute a new set of parameters Θ^(t+1)_A such that:

Θ^(t+1)_A = argmax_{Θ_A} P(X_j, Q* | M_j, Θ_A)
[Fig. 1.4: Illustration of Viterbi training: starting from a linear segmentation (represented by dashed lines), we estimate the best path (by the Viterbi dynamic programming recursion) matching each utterance X_j on its associated model M_j, yielding a new set of parameters Θ. Until convergence, this new set of parameters is then used to find a new optimal segmentation, hence new parameters.]

4) Iterate (going back to step 2) as long as Π_{j=1}^{J} P(X_j | M_j, Θ^(t)_A) increases, or until the relative improvement falls below a pre-defined threshold.

This iterative process can be proved to converge to a local optimum, whose quality will depend on the quality of the initialization.

1.6 Language modeling

1.6.1 Language model

The goal of the language model is to estimate the prior probability of each hypothesized model M_i, i.e., P(M_i|Θ_L) in (1.5), such that, multiplied with the acoustic model likelihood, we can find the word sequence (model M) maximizing (1.2). Since every model M is associated with a specific word string W = {w_1, w_2, …, w_l, …, w_L}, the prior probability P(M) is often assumed
equivalent to the probability of the associated word sequence:

P(M) = P(w_1, w_2, …, w_l, …, w_L)   (1.29)
     = Π_{l=1}^{L} P(w_l | w_1, w_2, …, w_{l−1})   (1.30)

by simply using the rule of decomposition of joint probabilities. However, estimating all the factors in (1.30) would require too many parameters, since for any reasonable vocabulary size most of the conditional probabilities have just too many arguments. Furthermore, it is quite unrealistic to believe that the speaker's choice of the l-th word depends on the entire past word history. Therefore, (1.30) is often simplified into:

P(M) = Π_{l=1}^{L} P(w_l | φ(w_l))   (1.31)

where φ(w_l) represents the limited history of word w_l needed to reliably estimate the associated conditional probability, and typically refers to a few words prior to the current word. Typically, this history is limited to a group of N words, and (1.31) can be approximated by a product of terms that each depend on the N−1 previous words, using the Markov assumption referred to earlier; the resulting estimate is called an N-gram probability, as further discussed in Section 1.4. A simple example of this approach is the approximation of an L-word sequence probability by the basic and surprisingly powerful trigram language model, where in (1.31) we have:

P(w_l | φ(w_l)) = P(w_l | w_{l−1}, w_{l−2})   (1.32)

In large-vocabulary recognition tasks, the language model can be somewhat more complicated, including longer N-grams for common sequences, word class probabilities, etc., and can incorporate either interpolations or switches between the different sub-models. In smaller tasks, the language model can simply consist of lists of words that are allowed to follow each other. In [11], the concept of equivalence class is also introduced, where φ(w_l) represents the information required to reliably estimate the conditional probabilities so as to fulfill the following requirements:

– Minimize the number of parameters
to estimate – so that the available data will be sufficient for the estimation – and optimize the performance (generalization) on the (unseen) test set,
which is measured by the language model perplexity, which is just 2^H, where H is the average entropy for a word given the language model (9). In the simple case in which each word has exactly N equally likely successors, the perplexity is N. In general, adding linguistic predictive power will reduce the perplexity, and this tends to improve recognition performance (although this is not guaranteed; many researchers have been disappointed by recognition results using techniques that reliably reduce perplexity). Explicit representations of syntactic and semantic structure can be useful for perplexity reduction, but estimators for P(M) have been dominated by simple N-gram (mainly trigram) statistical approaches.

1.6.2 Language model training

Typically, the language model used should depend on the use the recognizer will be put to, and the language model parameters will typically be estimated by processing large amounts of examples of corresponding written material. It thus depends on text only, and not in any way on the acoustic signal.

From that training material (text), the simplest approach to estimating (for instance) trigrams is to compute relative frequencies, resulting in:

P(w_l | w_{l−1}, w_{l−2}) ≈ N(w_l | w_{l−1}, w_{l−2}) := N(w_l, w_{l−1}, w_{l−2}) / N(w_{l−1}, w_{l−2})   (1.33)

where N(w_l | w_{l−1}, w_{l−2}) is the relative frequency function and N(w_l, w_{l−1}, w_{l−2}) denotes the number of times the considered word sequence has been observed in the training corpus.

However, even in very large text corpora, some word sequences (and thus the associated trigrams, and even bigrams) are never actually encountered. It is therefore necessary to smooth the trigram frequencies by interpolating trigram, bigram, and unigram relative frequencies, resulting in the estimate:

P(w_l | w_{l−1}, w_{l−2}) ≈ λ_3 N(w_l | w_{l−1}, w_{l−2}) + λ_2 N(w_l | w_{l−1}) +
λ_1 N(w_l)   (1.34)

with the constraint that Σ_{k=1}^{3} λ_k = 1. After the relative frequencies (which are then fixed) have been estimated on the training data, the optimal values of λ_k are estimated on an independent training set (often referred to as the development set) through the Expectation-Maximization algorithm.

(9) The average entropy over all possible word sequences W_j being defined as −Σ_j P(W_j) log P(W_j).
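The count-based estimate (1.33) and the interpolation (1.34) can be sketched as follows. The fixed λ weights stand in for values that would normally be tuned by EM on a development set, and the class name is illustrative:

```python
from collections import Counter

class InterpolatedTrigramLM:
    """Trigram model with the linear interpolation of eq. (1.34).

    Relative frequencies come from raw counts as in eq. (1.33); the
    weights (lambda_3, lambda_2, lambda_1), which must sum to 1, are
    fixed here instead of being EM-tuned on held-out data.
    """

    def __init__(self, corpus, lambdas=(0.6, 0.3, 0.1)):
        assert abs(sum(lambdas) - 1.0) < 1e-9
        self.l3, self.l2, self.l1 = lambdas
        self.uni = Counter(corpus)
        self.bi = Counter(zip(corpus, corpus[1:]))
        self.tri = Counter(zip(corpus, corpus[1:], corpus[2:]))
        self.total = len(corpus)

    def prob(self, w, prev1, prev2):
        """P(w_l | w_{l-1}, w_{l-2}): prev1 = w_{l-1}, prev2 = w_{l-2}."""
        f3 = self.tri[(prev2, prev1, w)] / self.bi[(prev2, prev1)] \
            if self.bi[(prev2, prev1)] else 0.0
        f2 = self.bi[(prev1, w)] / self.uni[prev1] if self.uni[prev1] else 0.0
        f1 = self.uni[w] / self.total
        return self.l3 * f3 + self.l2 * f2 + self.l1 * f1
```

Because the unigram term is never zero for any word seen in training, the interpolated estimate stays strictly positive even for unseen trigrams and bigrams, which is the point of the smoothing.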
1.7 Decoding

Recognition of an unknown utterance X is performed by solving (1.5) using local HMM probabilities P(x_n|q_k) and dynamically integrating language model constraints (probabilities). In principle, this problem could be solved by running the forward recurrence (1.15) to estimate (1.17), or its Viterbi approximation (1.22), for all possible models M_i and, after multiplication by the associated language model probability P(M_i), finding the word sequence W associated with the most probable model M (i.e., the model M that has the highest likelihood of having generated X):

M = argmax_{M_i} [ P(X|M_i) P(M_i) ]   (1.35)

However, while this straightforward approach is tractable for isolated words or a very limited set of possible sentences, such an exhaustive search quickly becomes intractable for large-vocabulary problems, as well as for any reasonably sized continuous speech recognition problem. To reduce the complexity of the problem, decoding is then re-formulated as a search problem [11, 28]. Starting from the language model, especially in the case of N-gram models (see CHAPTER... for more discussion about this), we can represent ("unfold") all legal sentences as a large (possibly infinite) branching tree network, such that at the root (initial node) of the tree there is a branch to every possible start word. All first words are then connected to all possible following words, with a transition probability on each edge given by the associated (bigram) language model probability (10). When expanding that branching tree (possibly dynamically during decoding), we will also have some nodes marked (according to the language model) as possible word-end nodes. Of course, this tree can be very large, but if extended deep enough it would, in principle, cover all possible sentences.
Furthermore, replacing each node of the tree by the associated word HMM (as discussed in Section 1.5), we end up with a very large HMM tree in which we have
to do the search.

(10) In the case of a higher-order N-gram, we have to take a longer history into account to assign the edge transition probabilities. Another possibility is to turn the higher-order Markov model into a (more complex) first-order Markov model.
In the case of the Viterbi approximation (11), temporal variability is typically handled by dynamic time alignment [4, 21] of the incoming speech representation with the pre-stored models. This time alignment, exploiting Bellman's dynamic programming and optimality principle [2], attempts to find the best-matching path by local comparisons between possible model state sequences, so as to find the most probable representation of the incoming speech vectors. This algorithm can easily be applied to the large branching tree, where the log likelihood log P(X_1^n, q_l^n) of any path from the start node to some point in the tree can be evaluated by adding all the (HMM) log state transition probabilities, all the (HMM) log state emission probabilities, and all the log language-model probabilities, yielding what is defined as the accumulated score. In [28, 27], such a path is defined as a token, which carries all the accumulated score and history information necessary to pursue the search. In this case, every "active" token has to be moved to the adjoining nodes, while updating its history and its score according to the associated transition probability and the current state emission probability. Of course, the further we pursue the search, the more tokens we have to move forward into the tree, so pruning and efficient search strategies have to be designed. Many efficient heuristic search techniques have been developed. As with all search problems, there are two main approaches:

– Breadth-first: In this case, all current hypotheses are pursued in parallel, and all scores refer to partial paths of the same length (same instant n in P(X_1^n, q_l^n), but for all possible states q_l). This is usually referred to as Viterbi decoding [4, 21].
However, since the branching tree will usually be very large (especially in the case of large-vocabulary continuous speech recognition), some pruning technique, usually referred to as beam search (see, e.g., [22]), will be required, and only the most promising "tokens" will be expanded (12).

– Depth-first: In this case, all
the tokens are sorted (in terms of their associated scores) into a "stack", and we simply keep pursuing the most promising hypothesis (the token with the best score) as long as it remains on top of the stack. In this case, the tokens in the stack refer to different path lengths, and to preserve the efficiency of the search (while also preserving its "optimality" as far as possible), we thus have to add to every token score a rough estimate of the contribution of the (unknown) remainder of the path (equivalent to the backward probability in the "forward-backward" algorithm), to guarantee that the different scores in the stack are somehow comparable.

(11) The approach remains very similar, although somewhat more complex, in the case of full likelihood decoding.
(12) This can be achieved by pruning out all the tokens whose score is below a certain absolute or relative (to the other tokens) threshold.
If we make sure to underestimate this contribution, the optimality of the search is preserved; however, the more we underestimate that quantity, the less efficient the search will be. Examples of depth-first decoders are stack decoders [11] and A*-decoders [13, 16].

In both cases, the search process ends when we have reached the end of the sentence. By checking the scores of all possible word-end nodes and selecting the word-end node with the highest score, we can find the associated recognized sentence by looking at the history of the associated token.

Finally, it is important to note here that the sub-word units (e.g., phonemes) thus never need to be recognized categorically before the final decision is made. They merely represent units that are shared between different words in the lexicon and whose choice is dictated by the required lexicon, its distinctive symbolic representation by the chosen phonemes, and the anticipated variations in pronunciation of the words in the lexicon. The hypothesized word sequences are formed by a heuristic search that in principle considers the probabilities of the states of all the phonemes in the vocabulary. The hypothesized word sequence is chosen via a competition between the various paths that are evaluated at the end of the acoustic input. The evaluation is done in terms of (1.5), consisting of the product of all local acoustic likelihoods corresponding to a state sequence, combined with the appropriate language probabilities for each word sequence.
The combination of local scores into scores for the entire utterance assumes conditional independence (independence when conditioned on the state variable) and the Markov property (that, for both words and states, only a limited history need be considered to represent the probability of moving from one to the next). In practice, the least viable hypotheses are pruned from the search to save on computation, so not all possible hypotheses
are actually considered. These "search errors" (those caused by dropping unlikely hypotheses) are typically a fairly small source of errors.
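The beam pruning used to drop the least viable hypotheses can be sketched as a single filtering step over the set of active tokens, here keyed by state id and carrying accumulated −log scores (a deliberately simplified stand-in for full token structures with word histories):

```python
def prune(tokens, beam):
    """Beam pruning for breadth-first (Viterbi-style) token passing.

    `tokens` maps a state id to its accumulated -log score (lower is
    better).  Tokens worse than the current best score by more than
    `beam` are dropped; only the survivors are expanded at the next
    frame.  A tight beam is faster but risks search errors.
    """
    best = min(tokens.values())
    return {state: cost for state, cost in tokens.items()
            if cost - best <= beam}
```

In a real decoder this filter runs once per frame, between token propagation and the next emission update, and the beam width trades decoding speed against the search errors described above.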
Bibliography

[1] Baum, L.E. and Petrie, T., "Statistical inference for probabilistic functions of finite state Markov chains", Annals of Mathematical Statistics, vol. 37, pp. 1554-1563, 1966.
[2] Bellman, R., Dynamic Programming, Princeton University Press, Princeton, 1957.
[3] Bourlard, H. and Morgan, N., Connectionist Speech Recognition - A Hybrid Approach, Kluwer Academic Publishers, 1994.
[4] Bourlard, H., Kamp, Y., Ney, H., and Wellekens, C.J., "Speaker Dependent Connected Speech Recognition via Dynamic Programming and Statistical Methods", in Speech and Speaker Recognition, M.R. Schroeder (Ed.), pp. 115-148, Karger (Basel), 1985.
[5] Bourlard, H., Konig, Y., and Morgan, N., "A training algorithm for statistical sequence recognition with applications to transition-based speech recognition", IEEE Signal Processing Letters, vol. 3, no. 7, pp. 203-205, 1996.
[6] Bourlard, H., Konig, Y., and Morgan, N., "REMAP: Recursive estimation and maximization of a posteriori probabilities in connectionist speech recognition", Proceedings of EUROSPEECH'95 (Madrid, Spain), pp. 1663-1666, 1995.
[7] Deller, J.R., Proakis, J.G., and Hansen, J.H., Discrete-Time Processing of Speech Signals, MacMillan, 1993.
[8] Gold, B. and Morgan, N., Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Wiley, 2000.
[9] Hennebert, J., Ris, C., Bourlard, H., Renals, S., and Morgan, N., "Estimation of global posteriors and forward-backward training of hybrid HMM/ANN systems", Proceedings of Eurospeech'97 (Rhodes, Greece), pp. 1951-1954, 1997.
[10] Hopcroft, J., Motwani, R., and Ullman, J., Introduction to Automata Theory, Languages, and Computation, Second Edition, Addison-Wesley, 2000.
[11] Jelinek, F., Statistical Methods for Speech Recognition, MIT Press, 1998.
[12] Jurafsky, D. and Martin, J.H., Speech and Language Processing - An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, Prentice-Hall, 2000.
[13] Kenny, P., "A* admissible heuristics for rapid lexical access", Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Toronto), S10.1, 1991.
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/3 ppur_asr.pdf
| 33
|
3 ppur_asr
| 0
|
Speech Recognition 33 [14] Konig, Y., Bourlard, H., and Morgan, N., “REMAP : Recursive estimation and maximization of a posteriori probabilities - Appli- cation to transition-based connectionist speech recognition”, in Ad- vances in Neural Information Processing Systems VIII, D.S. Tou- retzky, M.C. Mozer, and M.E. Hasselmo (Eds.), pp. 388-394, The MIT Press, 1995. [15] Liporace, L.A.,“Maximum likelihood estimation for multivariate ob- servations of Markov sources”, IEEE Trans. on Information Theory, vol. IT-28, no. 5, pp. 729-734, 1992. [16] Paul, D.B.,“Algorithms for an optimal A∗search and linearizing the search in the stack decoder”, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Toronto), pp. 693-696, 1991. [17] Richard, M.D. and Lippmann, R.P., “Neural network classifiers esti- mate Bayesian a posteriori probabilities, Neural Computation, no. 3, pp. 461-483, 1991. [18] Morgan, N. and Bourlard, H., “Generalization and parameter esti- mation in feedforward nets : some experiments”, in Advances in Neu- ral Information Processing Systems II, D.S. Touretzky (Ed.), pp. 630- 637, San Mateo, CA : Morgan Kaufmann, 1997. [19] Morgan, N. and Bourlard, H., “Continuous speech recognition : An introduction to the hybrid HMM/connectionist approach,”, IEEE Si- gnal Processing Magazine, vol. 12, no. 3, pp. 25-42, 1995. [20] Morgan, N. and Bourlard, H., “Neural networks for statistical recog- nition of continuous speech”, Proceedings of the IEEE, vol. 83, no. 5, pp. 741-770, 1995. [21] Ney, H., “The use of a one-stage dynamic programming algorithm for connected word recognition”, IEEE Trans. on
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/3 ppur_asr.pdf
| 34
|
3 ppur_asr
| 0
|
the IEEE, vol. 83, no. 5, pp. 741-770, 1995. [21] Ney, H., “The use of a one-stage dynamic programming algorithm for connected word recognition”, IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. 32, pp. 263-271, 1984 [22] Ney, H. and Aubert, X., “Dynamic programming search : from digit strings to large vocabulary word graphs”, in Automatic Speech and Speaker Recognition, C.-H. Lee, F.K. Soong, and K.K. Paliwal (eds.), pp. 385-413, Kluwer Academic Publishers, 1996. [23] D.B. Paul, J.K. Baker, and J.M. Baker,“On the interaction between true source, training, and testing language models,” IEEE Proc. Intl. Conf. on Acoustics, Speech, and Signal Processing (Toronto, Canada), pp. 569-572, 1991. [24] Pierce, J., “Whither speech recognition,” J. Acoust. Soc. Am., vol. 46, pp. 1049-1051, 1969.
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/3 ppur_asr.pdf
| 34
|
3 ppur_asr
| 0
|
34 Bibliography [25] Robinson, T., Hochberg, M., and Renals, S., “The Use of recurrent neural networks in continuous speech recognition”, Automatic Speech and Speaker Recognition, Kluwer Academic Publishers, pp. 233-258, 1996. [26] Viterbi, A.J., “Error bounds for convolutional codes and an asym- metrically optimum decoding algorithm”, IEEE Transactions on In- formation Theory, vol. IT-13, pp. 260-267, 1967. [27] Young, S.R., Russell, N.H., and Thornton, J.H.S.,“Token passing : A simple conceptual model for connected speech recognition”, Technical Report CUED/F-INFENG/TR38, Cambridge University Engineering Department, 1989. [28] Young, S., “A review of large-vocabulary continuous-speech recogni- tion”, IEEE Signal Processing Magazine, September 1996.
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/3 ppur_asr.pdf
| 35
|
3 ppur_asr
| 0
|
Chapter 2

Speaker Verification

2.1 Introduction

Speech contains many characteristics that are specific to each individual, many of which are independent of the linguistic message of an utterance and which, in speech recognition, are generally treated as a source of degradation. For instance, each utterance from an individual is produced by the same vocal tract, tends to have a typical pitch range (particularly within each gender), and has characteristic articulator movements associated with dialect or gender. All of these factors have a strong effect on the speech signal that is highly correlated with the particular individual who is speaking. For this reason, listeners are often able to recognize a speaker's identity fairly quickly, even over the telephone. Artificial systems that recognize speakers rather than speech have been the subject of much research over the last 20 years, and commercial systems are already in use.

Speaker recognition is a generic term for the classification of a speaker's identity from an acoustic signal. In the case of speaker identification, the speaker is classified as being one of a finite set of speakers. As in speech recognition, this requires the comparison of a speech utterance with a set of references for each potential speaker. In the case of speaker verification, the speaker is classified as having the purported identity or not. That is, the goal is to automatically accept or reject an identity that is claimed by the speaker. In this case, the user first identifies herself/himself (e.g., by entering or uttering a PIN code), and the distance between the associated reference and the pronounced utterance is compared to a threshold that is determined during training. Speaker recognition can be based on text-dependent or text-independent utterances, depending on whether the recognition process is constrained to a pre-defined text.
Speaker recognition has many potential applications, including: secured use of access cards (e.g., calling and credit cards), access control to databases (e.g., telephone and banking applications), access control to facilities, electronic commerce, information and reservation services, remote access to computer networks, etc.

Speaker identification and verification each require the calculation of a score reflecting the distance between an utterance and a set of references. One of the simplest approaches, and one of the first to be used, represents each speaker by a single Gaussian (or a set of Gaussians) in the acoustic parameter space. The parameters are estimated on a training set containing several sentences pronounced by each of the potential users. Assuming that all speakers are equiprobable and that the cost associated with an error is the same for every speaker, the decision rule consists in assigning an utterance to the speaker with the closest density function. Variants of this approach are still used today and will be briefly discussed in Section 2.5.

As in speech recognition, speaker recognition benefited from extensive application of Hidden Markov Model (HMM) technology in the 1990s. The resulting approaches for these two application areas are very similar. In speaker recognition, though, each speaker is represented by one or several specific HMMs. In the case of text-independent speaker recognition, these HMMs will often be ergodic (fully connected). For text-dependent speaker recognition, specific sentences can be used to model the lexical information in addition to the speaker characteristics. During speaker identification, these models are used to compute matching scores for the input utterance, and the matching speaker is the one associated with the closest reference.
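As a concrete illustration of this simplest decision rule (equal priors, equal error costs, pick the speaker with the closest density), the sketch below models each speaker by a single diagonal-covariance Gaussian and assigns an utterance to the speaker with the highest total log-likelihood. The feature dimensions, speaker names, and all numbers are made-up toy values, not taken from the text:

```python
import math

def gaussian_loglik(x, mean, var):
    """Log-likelihood of one feature vector under a diagonal Gaussian."""
    ll = 0.0
    for xi, mu, v in zip(x, mean, var):
        ll += -0.5 * (math.log(2 * math.pi * v) + (xi - mu) ** 2 / v)
    return ll

def identify(utterance, speaker_models):
    """Assign the utterance (a list of feature frames) to the speaker
    whose density yields the highest total log-likelihood."""
    best, best_ll = None, float("-inf")
    for name, (mean, var) in speaker_models.items():
        ll = sum(gaussian_loglik(frame, mean, var) for frame in utterance)
        if ll > best_ll:
            best, best_ll = name, ll
    return best

# Hypothetical 2-D cepstral means/variances for two enrolled speakers.
models = {"alice": ([0.0, 1.0], [1.0, 1.0]),
          "bob":   ([3.0, -1.0], [1.0, 1.0])}
frames = [[2.8, -0.9], [3.1, -1.2]]   # frames close to "bob"'s model
print(identify(frames, models))       # -> bob
```

In practice the per-speaker models would be trained on several enrollment sentences, and a Gaussian mixture would replace the single Gaussian.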
In the case of speaker verification, this matching score is computed for the model associated with the claimed identity and for some number of alternate models; the putative speaker is then either accepted or rejected based on some measure of the distance of its model from the models of its rivals (e.g., from its closest rival). The decision also incorporates a threshold that is determined during the enrollment of each new speaker. For a good introduction to speaker recognition, we refer the reader to [4, 5].

2.2 Acoustic parameters
In speech recognition, the main goal of the acoustic processing module is to extract features that are invariant to the speaker and channel characteristics and representative of the lexical content. Speaker recognition, on the other hand, requires the extraction of speaker-characteristic features, which may be independent of the particular words that were spoken. Such characteristics include gross properties of the spectral envelope (such as the average formant positions over many vowels) or the average range of the fundamental frequency. Unfortunately, since these features are often difficult to estimate reliably, particularly from a short enrollment period, current systems often use acoustic parameters that were originally developed for speech recognition. However, LPC parameters (or LPC cepstra), which have fallen out of favor in automatic speech recognition (ASR) due to their strong dependence on individual speaker characteristics, tend to be preferred in speaker recognition for this very reason. In general, though, features based on some kind of short-term spectral estimate are used in speaker recognition much as they are in ASR. In addition, pitch information is sometimes used if it can be estimated reliably [1, 10].

Finally, the effects of transmission channel variability are usually reduced by techniques initially proposed for speech recognition, such as cepstral mean subtraction or RASTA-PLP. However, these techniques can also filter out important speaker-specific characteristics, so further research seems necessary here.

2.3 Similarity measures

As we have seen for speech recognition, the problem of speaker recognition can be formulated in terms of statistical pattern classification, and the probability that a speaker Sc (rather than some other speaker) has pronounced the sentence associated with the acoustic parameter sequence X is given by:

P(Sc|X) = P(X|Sc) P(Sc) / [ Σ_{i=1}^{I} P(X|Si) P(Si) ]     (2.1)

where Sc represents the identity being tested (or claimed, in the case of speaker verification), and P(X|Si) is the conditional probability of the sequence X given the speaker Si.
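Equation (2.1) can be evaluated directly from per-speaker log-likelihoods. A minimal sketch (the log-likelihood values and priors below are arbitrary illustrative numbers; subtracting the maximum before exponentiating is just a standard numerical-stability trick):

```python
import math

def posterior(logliks, priors, claimed):
    """Eq. (2.1): P(Sc|X) = P(X|Sc)P(Sc) / sum_i P(X|Si)P(Si),
    computed from per-speaker log-likelihoods log P(X|Si)."""
    m = max(logliks.values())               # stabilize the exponentials
    joint = {s: math.exp(ll - m) * priors[s] for s, ll in logliks.items()}
    return joint[claimed] / sum(joint.values())

# Two candidate speakers with equal priors; "s1" fits the data far better.
p = posterior({"s1": -100.0, "s2": -110.0}, {"s1": 0.5, "s2": 0.5}, "s1")
print(round(p, 4))  # -> 1.0 (to 4 decimals; the exact value is 1/(1 + e^-10))
```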
Ideally, the sum in the denominator should run over all possible speakers. In general, this sum will be very large and, in the case of speaker verification, should include all possible rival speakers, which is unfortunately impossible. As in speech recognition, the parameters for density estimation in speaker recognition are determined during a training phase (based on a maximum likelihood or discriminant criterion).
Once these parameters are determined, the denominator in (2.1) is independent of the class and can be neglected for speaker identification. Consequently, and assuming equal prior probabilities for all speakers, a speaker S will be identified as speaker Sc if:

P(X|Sc) ≥ P(X|Si),  for all i ≠ c     (2.2)

Speaker verification, however, is a form of hypothesis test. In this case, we will verify the hypothesis that a speaker S is indeed the putative speaker Sc if:

P(Sc|X) > P(S̄c|X)     (2.3)

where S̄c represents the set of all possible rival speakers, so that the right-hand side is the probability of the speaker being anyone except Sc. Typically, this is stated with some (speaker-dependent) margin or threshold, i.e., a speaker S is taken to be the speaker Sc if:

P(Sc|X) / P(S̄c|X) > δc     (2.4)

where δc is a threshold > 1, which has to be optimized independently for each potential customer Sc to guarantee optimal performance of the system (see Section 2.7). Recall that

P(S̄c|X) = P(S1 or S2 or ... or Si≠c | X) = Σ_{i≠c} P(Si|X)

if the events Si are mutually exclusive (which is the case) and collectively exhaustive (which will often be wrong). Using (2.1), and assuming uniform priors P(Si) over all speakers, criterion (2.4) becomes:

S = Sc  if  P(X|Sc) / P(X|S̄c) = P(X|Sc) / Σ_{i≠c} P(X|Si) > δ′c     (2.5)

defined as the likelihood ratio criterion, where the sum over i incorporates all the possible speakers. (1)

(1) The hypothesized speaker could also be included in the sum. This sometimes yields better estimates and better performance.

Based on the logarithm of the likelihood ratio, we then have:

S = Sc  if  log P(X|Sc) − log P(X|S̄c) > ∆c     (2.6)

where ∆c = log δ′c. In general, a large enough value of this difference means that the identity of Sc is validated, while it is rejected when the value falls below the threshold. Finally, the design of a speaker verification system thus involves:
1) The optimal setting of the decision threshold. As discussed in Section 2.7, this threshold is usually estimated by assuming that the distributions of P(X|Sc) and P(X|S̄c) are Gaussian. Figure 2.1 shows typical Gaussian approximations to the distributions of the likelihoods P(X|Sc) and P(X|S̄c) for a specific training and test set. The variability of these distributions also shows the importance of using a similarity measure based on a likelihood ratio, as reported in [8, 3]. (2)

(2) In addition to normalizing scores, the likelihood ratio also reduces the effect of some factors affecting the similarity measure, such as the variability due to differences in the transmission channel, as well as changes in the speaker's voice over time.

2) A good estimation of the normalization factor, as discussed below.

As for discriminant approaches in speech recognition [2], one of the difficulties in the development of speaker verification systems is thus the estimation of the normalization factor P(X|S̄c). Several solutions have been proposed. In one approach, we assume that the set of reference speakers already enrolled in the database is sufficiently representative of all possible speakers; the normalization factor can then be estimated as

log P(X|S̄c) ≈ Σ_{Si∈R, i≠c} log P(X|Si)     (2.7)

where R represents the set of speakers already enrolled in the system. One can also assume that the sum in (2.7) is dominated by the closest rival speaker, yielding the approximation

log P(X|S̄c) ≈ max_{Si∈R, i≠c} log P(X|Si)     (2.8)

These solutions are, however, often impractical since:

1) In both cases, it is necessary to estimate the conditional probabilities for all the reference speakers, which will often require too much computation.

2) In (2.8), the value of the maximum conditional probability varies a lot from speaker to speaker, depending on how close the nearest reference speaker is to the test speaker.

An alternative solution consists of considering a well-chosen subset of reference speakers, usually called a "cohort", on which P(X|S̄c) is estimated. The cohort is usually defined as the group of speakers whose models are determined to be close to, or more "competitive" with, the model of the target speaker Sc [8, 16, 3]. A different cohort is thus assigned to every speaker and is automatically determined during the enrollment phase; it could also be updated during the enrollment of new users. For this approach, the following approximation is used:

log P(X|S̄c) ≈ Σ_{Si∈Rc} log P(X|Si)     (2.9)

where Rc represents the cohort associated with speaker Sc. Experimental results show that this kind of normalization improves speaker separability and reduces the sensitivity to the decision threshold. In the spirit of a better approximation to (2.1), it was recently shown in [11] that it can be advantageous to include the model of the hypothesized speaker in the cohort. This improves the behavior of the algorithm in cases where the acoustics for the actual speaker are rather different from the models for the claimed speaker identity (for instance, in the case of different genders), resulting in very small and unreliable likelihoods.

When HMMs are used, another solution consists of approximating P(X|S̄c) ≈ P(X|M), where M is a speaker-independent model trained either on a large set of speakers or only on the set of reference speakers. Depending on whether the verification system is text-dependent or text-independent, M will be either a model representing the sentence to be pronounced or a fully connected (ergodic) Markov model, respectively.

Another solution consists of decomposing the problem of speaker/non-speaker discrimination into a series of two-class problems. In [6], this problem was addressed by using several decision trees for each speaker Sc, each tree dealing with the problem of discriminating between Sc and one specific speaker in the cohort (although the cohort was the same for every speaker). The set of decision trees can then be used to approximate P(X|S̄c).
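Combining the log-likelihood-ratio test (2.6) with the cohort approximation (2.9) gives a verification rule that is easy to state in code. The sketch below is a toy illustration: the scores and the threshold ∆c are arbitrary numbers, and the `mode` switch also covers the closest-rival variant (2.8); averaged variants of the cohort sum are also used in practice.

```python
def log_rival(cohort_logliks, mode="sum"):
    """Approximate log P(X|~Sc): Eq. (2.9) sums the cohort
    log-likelihoods, while Eq. (2.8) keeps only the closest rival."""
    return max(cohort_logliks) if mode == "max" else sum(cohort_logliks)

def verify(loglik_claimed, cohort_logliks, delta_c, mode="sum"):
    """Accept iff log P(X|Sc) - log P(X|~Sc) > Delta_c  (Eq. 2.6)."""
    return (loglik_claimed - log_rival(cohort_logliks, mode)) > delta_c

# Closest-rival normalization: a claimed score of -100 against a best
# cohort rival at -120 clears a (made-up) margin of 10.
print(verify(-100.0, [-120.0, -135.0], delta_c=10.0, mode="max"))  # -> True
```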
This approach was also shown to be somewhat less sensitive to the optimal setting of the decision threshold.

Following [5], we now briefly discuss some of the main speaker verification approaches.

2.4 Text-dependent speaker verification

In text-dependent speaker verification, the system knows in advance the access password (or sentence) that will be used by the user. For each individual, there is a model that encodes both the speaker characteristics and the lexical content of the password. In this case, the techniques used for speaker verification are particularly similar to the methods used in speech recognition, namely:

1) The Dynamic Time Warping (DTW) approach: the password of each user is simply represented as a small number of acoustic sequence templates corresponding to pronunciations of the password. During verification, the score associated with a new utterance of the password is computed via dynamic programming (dynamic time warping) against the reference model(s). This approach is simple and requires relatively few computational resources during enrollment. It has been the basis of several commercial products.

2) The HMM approach: the password associated with each user is represented by an HMM whose parameters are trained from several repetitions of the password. The amount of training required depends on the number of parameters, which can be a practical problem for larger models. The score associated with a new utterance is computed by either of the methods usually used in speech recognition: the Viterbi algorithm, which finds the best state path, or the forward (α) recurrence, which takes all possible paths into account. As is generally found in speech recognition, HMM approaches have been found to be more accurate than simple DTW, but at the cost of higher computational requirements during training [17].

Given either of these approaches, a putative speaker identity can be verified using the similarity measure defined in Section 2.3 and comparing it to a decision threshold.

2.5 Text-independent speaker verification

In the case of text-independent speaker verification, the lexical content of the utterance used for verification cannot be predicted.
Since it is impossible to model all possible word sequences, different approaches have been proposed, including:

1) Methods based on long-term statistics, such as the mean and variance calculated on a sufficiently long acoustic sequence. However, these statistics are a minimal representation of spectral characteristics and can also be sensitive to the variability of the transfer function of the transmission channel. More recently, an alternative approach has been proposed in which the statistics of dynamic variables (e.g., in the cepstral domain) are used and modelled by a multi-dimensional autoregressive (AR) model [12]. In [7], different distance measures are compared for this AR approach, and it is shown that performance similar to standard HMM approaches can be achieved. It is also shown that the optimal order of the autoregressive process is around 2 or 3. Furthermore, correct normalization of the scores according to an a posteriori criterion seems essential for good performance.

2) Methods based on vector quantization. Vector quantization of spectral or cepstral vectors can be used to replace the original vector with an index into a codebook. In the case of speaker recognition, the spectral characteristics of each speaker can be modelled by one or more codebook entries that are representative of that speaker; see, for example, [19] for a typical reference. The score associated with an utterance is then defined as the sum of the distances between each acoustic vector in the sequence and its closest prototype vector from the codebook associated with the putative speaker (or the codebooks associated with the cohort, for the normalization score). It is also possible to use a pitch detector and to define two sets of prototypes per speaker, one each for voiced and unvoiced segments. For the voiced segments, pitch can then be added to the feature set used to define the prototypes and compute the distance, which requires a choice of weights for the features. Finally, an alternative to "memoryless" vector quantization (so called since each vector is quantized independently of its predecessors) was proposed in [9], in which source coding algorithms were used.

3) Fully connected (ergodic) HMMs. In this case, a fully connected HMM is trained (using Viterbi or forward-backward training) during the enrollment of each user.
The HMM states can then be defined in a completely arbitrary and unsupervised manner; in this approach, distances are stochastic and trained, but otherwise the approach is similar to the determination of codebook entries in vector quantization. Alternatively, states can be associated with specific classes, e.g., phones or even coarse phonetic categories. Some temporal constraints will generally be included in the models, typically by introducing minimum duration constraints on each state. Finally, several solutions using different topologies, different probability density functions associated with each state, as well as different training criteria, have been proposed, including:

– HMMs trained according to a maximum likelihood criterion and having several (single- or multi-) Gaussian states, or just a single multi-Gaussian state [11, 15]. Some discriminant training approaches typically used in speech recognition (e.g., Maximum Mutual Information [2]) have also been used in speaker verification to improve discrimination between users.

– Autoregressive HMMs: in this case, the probability distribution associated with each state is estimated via an autoregressive process. Initially introduced by Poritz [14], this approach has been used with success by several laboratories [18]. Later on, it was also generalized to the class of HMMs using mixtures of autoregressive processes [20].

4) Artificial neural networks: multilayer perceptrons have also been tested on speaker verification problems [13]. A specific neural network with one or two output units is associated with each speaker. The weights of each network are trained positively on utterances from the corresponding speaker, and negatively on many utterances from rival speakers.

2.6 Text-prompted speaker verification

Since verification is based on both the speaker characteristics and the lexical content of a secret password, text-dependent speaker verification systems are generally more robust than text-independent systems. However, both kinds of systems are susceptible to fraud, since for typical applications the voice of the speaker could be captured, recorded, and reproduced. In the case of a text-dependent system, even the password could be captured. To limit this risk, speaker verification systems based on prompted text have been developed. In this case, for each access, a recorded or synthetic prompt asks the user to pronounce a different random sentence [3, 11]. The underlying lexicon could either be very large or be limited to the 10 digits, which would then be used to generate random digit strings. The advantage of such an approach is that impostors cannot predict the prompted sentence. Consequently, pre-recorded utterances from the customer will be of no use to the impostor.
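The prompt itself only needs to be unpredictable. A minimal sketch of a random digit-string prompt generator (the function name and prompt length are our own illustrative choices, not from the text):

```python
import random

def digit_prompt(n_digits=6):
    """Draw an unpredictable string of digits for the user to pronounce."""
    rng = random.SystemRandom()  # OS entropy source, not a seeded PRNG
    return " ".join(str(rng.randrange(10)) for _ in range(n_digits))

print(digit_prompt())  # e.g. "4 0 9 2 7 1" (different on each access)
```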
During each access, the system prompts the user with a different sentence, and a speech recognition system is used prior to verification to validate the utterance. Even when the utterance is rejected, the user can still be prompted with an additional sentence. Since the new sentence is different, the acoustic vector sequence will not be strongly correlated with the previous one, which improves the quality of the estimators by accumulating uncorrelated evidence. This strategy would not be as useful in a text-dependent system, since the repeated sentence would have the same lexical content as the original one.

The speech recognition used before text-prompted verification is often based on phonetic HMMs, typically using Gaussian or multi-Gaussian distributions. These models are defined to cover the lexicon and are trained independently on each user. A key difficulty with this approach is that there is typically not much enrollment data available to train the HMMs. For this reason, single-Gaussian, single-state phonetic models are often used. Given the enrollment data generally available in speaker verification problems, such simple models have often performed as well as more complex models [3]. During verification, the system knows the prompted sentence and, using the phonetic transcription of the lexicon, can build the associated HMM by simple concatenation of the constituent phone models. The resulting model is then used first to validate the utterance (by computing the confidence level associated with the acoustic vector sequence) and then to perform speaker verification. Given score normalization, a similar procedure can be used for the cohort speakers.

2.7 Identification, verification and the decision threshold

Each model discussed above can be used to compute a matching score (or a likelihood ratio) between a speaker model and a speech utterance. In the case of speaker identification, these scores are computed for all possible reference models, and the identified speaker is the one yielding the best score. In the case of speaker verification, scores are computed only for the putative speaker model and (when a normalization as discussed in Section 2.3 is used) the cohort models. As illustrated by Figure 2.1, the resulting (normalized) score is then compared to a threshold above which the speaker is accepted.
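The threshold setting discussed in this section is commonly derived from Gaussian fits to the customer and impostor score distributions. The following sketch finds the equal-error-rate threshold by bisection, assuming customer scores are higher on average; the means and standard deviations are invented toy values, not taken from Figure 2.1:

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative distribution function of a Gaussian N(mu, sigma^2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def eer_threshold(mu_imp, sd_imp, mu_cust, sd_cust, tol=1e-9):
    """Threshold t where false acceptance equals false rejection under
    Gaussian fits: 1 - CDF_imp(t) = CDF_cust(t). Since the difference is
    strictly decreasing in t, bisection over [mu_imp, mu_cust] works."""
    lo, hi = min(mu_imp, mu_cust), max(mu_imp, mu_cust)
    while hi - lo > tol:
        t = 0.5 * (lo + hi)
        fa = 1.0 - norm_cdf(t, mu_imp, sd_imp)   # impostors scoring above t
        fr = norm_cdf(t, mu_cust, sd_cust)       # customers scoring below t
        if fa > fr:
            lo = t
        else:
            hi = t
    return 0.5 * (lo + hi)

# With equal variances, the EER threshold sits midway between the means.
print(round(eer_threshold(-500.0, 300.0, 1000.0, 300.0), 3))  # -> 250.0
```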
Estimation of the optimal threshold is often critical to good performance of the system [8]. If the decision threshold is too high, too many customers will be rejected as impostors; such an error is referred to as a false rejection. Such a threshold will screen impostors very well, but at the cost of a high customer rejection rate. If the threshold is too low, too many impostors will be accepted as customers; this kind of error is called a false acceptance or (in more general signal detection parlance) a false alarm. Such a threshold will accept customers with little difficulty, but at the cost of a high impostor acceptance rate. In any real task, the
cost of these two kinds of errors must be assessed for the real application in order to evaluate the utility of the system. As a convenience for system comparisons, the performance of speaker verification systems is often measured in terms of the equal error rate (EER), corresponding to the decision threshold at which the false rejection rate is equal to the false acceptance rate. (3)

(3) As of 1998, speaker verification systems were reporting EER performance varying between 0.1% and 5%, depending on the conditions.

Fig. 2.1 Example of Gaussian approximations of the distributions of P(X|Sc) and P(X|S̄c) (one pair of Gaussians each, for the training and test sets respectively). The vertical line (at 750) represents the decision threshold corresponding to the Equal Error Rate (EER) as estimated on the training data. As shown here, the positions of the Gaussians can vary from training to test data, depending on the variability of the channel and speaker characteristics; this illustrates the importance of using normalized scores, as discussed in Section 2.3. Means and variances were computed on a set of real data corresponding to a specific speaker Sc and a given set of impostors.

This EER threshold is often estimated by assuming that the two distributions of P(X|Sc) and P(X|S̄c) are Gaussian (i.e., by fitting two Gaussians to experimental scores obtained for the customer and for a set of rival speakers) and computing the resulting EER point. If this
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/3 ppur_asr.pdf
| 46
|
3 ppur_asr
| 0
|
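The EER threshold described in the chunks above can be sketched numerically. A minimal Python sketch, assuming two Gaussian score distributions; the means and variances are illustrative placeholders (loosely chosen so the threshold lands near the figure's axis range), not the book's actual data:

```python
import math

def gauss_cdf(x, mu, sigma):
    """CDF of a Gaussian N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def eer_threshold(mu_imp, s_imp, mu_cust, s_cust, lo, hi, iters=60):
    """Bisect for the threshold where false acceptance equals false rejection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        far = 1.0 - gauss_cdf(mid, mu_imp, s_imp)  # impostors scoring above threshold
        frr = gauss_cdf(mid, mu_cust, s_cust)      # customers scoring below threshold
        if far > frr:
            lo = mid  # raise the threshold to reject more impostors
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters (assumed, not the book's data): impostor scores
# centered around -500, customer scores around 1500, on an axis like Fig. 2.1's.
thr = eer_threshold(mu_imp=-500.0, s_imp=600.0, mu_cust=1500.0, s_cust=400.0,
                    lo=-2000.0, hi=2500.0)
far = 1.0 - gauss_cdf(thr, -500.0, 600.0)
frr = gauss_cdf(thr, 1500.0, 400.0)
print(thr, far, frr)  # threshold comes out near 700 here; FAR and FRR coincide
```

With unequal variances the EER point is not midway between the means, which is why a numeric search (or the closed-form intersection of the two standardized scores) is used rather than a simple average.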
46 Bibliography EER threshold is computed on the training set, it will be referred to as an a priori threshold, while if it is computed on the test set it will be referred to as an a posteriori threshold. Of course, in real systems, the a posteriori EER measure will not be accessible (since in any one application the system operates with some particular scheme for setting the threshold), but the EER is often approximated as half of the sum of the two error rates. Bibliography [1] Atal, B.S., Automatic speaker recognition based on pitch contours, Journal of the Acoustical Society of America, vol. 52, pp. 1687-1697, 1972. [2] Bahl, L.R., Brown, P.F., de Souza, P.V., and Mercer, R.L., Maximum mutual information estimation of hidden Markov model parameters, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Tokyo), pp. 49-52, 1986. [3] de Veth, J. & Bourlard, H., Comparison of Hidden Markov Model Techniques for Automatic Speaker Verification in Real-World Conditions, Speech Communication (North-Holland), vol. 17, no. 1-2, pp. 81-90, 1995. [4] Doddington, G., Speaker recognition-identifying people by their voices, Proceedings of the IEEE, vol. 73, pp. 1651-1664, 1985. [5] Furui, S., An overview of speaker recognition technology, in Automatic Speech and Speaker Recognition, C.-H. Lee, F.K. Soong, K.K. Paliwal (eds.), pp. 31-56, Kluwer Academic Publishers, 1996. [6] Genoud, D., Moreira, M., and Mayoraz, E., Text dependent speaker verification using binary classifiers, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Seattle, Washington), pp. 129-132, 1998. [7] Griffin, C., Matsui, T., & Furui, S., Distance measures for text-independent speaker recognition based on MAR model, Proc. IEEE Intl. Conf. Ac
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/3 ppur_asr.pdf
| 47
|
3 ppur_asr
| 0
|
- 132, 1998. [7] Griffin, C., Matsui, T., & Furui, S., Distance measures for text-independent speaker recognition based on MAR model, Proc. IEEE Intl. Conf. Acoustics, Speech and Signal Processing (Adelaide, Australia), pp. I-309-312, 1994. [8] Higgins, A.L., Bahler, L., & Porter, J., Speaker verification using randomized phrase prompting, Digital Signal Processing, vol. 1, pp. 89-106, 1991. [9] Juang, B.-H. & Soong, F.K., Speaker recognition based on source coding approaches, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Albuquerque, NM), pp. 613-616, 1990.
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/3 ppur_asr.pdf
| 47
|
3 ppur_asr
| 0
|
Speaker Verification 47 [10] Matsui, T. & Furui, S., Text-independent speaker recognition using vocal tract and pitch information, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Albuquerque, NM), pp. 137-140, 1990. [11] Matsui, T. & Furui, S., Concatenated phoneme models for text-variable speaker recognition, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Minneapolis, MN), pp. II-391-394, 1993. [12] Montacie, C., Deleglise, P., Bimbot, F., & Caraty, M.-J., Cinematic techniques for speech processing: temporal decomposition and multivariate linear prediction, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (San Francisco, CA), pp. I-153-156, 1992. [13] Oglesby, J. & Mason, J.S., Optimization of neural models for speaker identification, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Albuquerque, NM), pp. 261-264, 1990. [14] Poritz, A.B., Linear predictive hidden Markov models and the speech signal, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Paris, France), pp. 1291-1294, 1982. [15] Rose, R.C. & Reynolds, R.A., Text independent speaker identification using automatic acoustic segmentation, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Albuquerque, NM), pp. 293-296, 1990. [16] Rosenberg, A.E., DeLong, J., Lee, C.-H., Juang, B.H., and Soong, F.K., The use of cohort normalized scores for speaker verification, Proc. Intl. Conf. on Spoken Language Processing, pp. 599-602, 1992. [17] Rosenberg, A.E.
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/3 ppur_asr.pdf
| 48
|
3 ppur_asr
| 0
|
K., The use of cohort normalized scores for speaker verification, Proc. Intl. Conf. on Spoken Language Processing, pp. 599-602, 1992. [17] Rosenberg, A.E., Lee, C.-H., & Gokcen, S., Connected word talker verification using whole word hidden Markov models, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Toronto, Canada), pp. 381-384, 1991. [18] Savic, M. & Gupta, S.K., Variable parameter speaker verification system based on hidden Markov modeling, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Albuquerque, NM), pp. 281-284, 1990. [19] Soong, F.K., Rosenberg, A.E., Rabiner, L.R., & Juang, B.-H., A vector quantization approach to speaker recognition, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Tampa, FL), pp. 387-390, 1985.
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/3 ppur_asr.pdf
| 48
|
3 ppur_asr
| 0
|
48 Bibliography [20] Tishby, N.Z., On the application of mixture AR hidden Markov models to text independent speaker recognition, IEEE Trans. Acoustics, Speech, and Signal Processing, vol. ASSP-30, pp. 563-570, 1991.
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/3 ppur_asr.pdf
| 49
|
3 ppur_asr
| 0
|
Libraries and Mapping Giovanni De Micheli Integrated Systems Laboratory This presentation can be used for non-commercial purposes as long as this note and the copyright footers are not removed © Giovanni De Micheli – All rights reserved
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 0
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 2 Module 1 • Objective - Libraries - Problem formulation and analysis - Algorithms for library binding based on structural methods
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 1
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 3 Library binding • Given an unbound logic network and a set of library cells - Transform it into an interconnection of instances of library cells - Optimize delay (under area or power constraints) - Optimize area (under delay and/or power constraints) - Optimize power (under delay and/or area constraints) • Library binding is also called technology mapping - Redesigning circuits in different technologies
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 2
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 4 Major approaches • Rule-based systems - Generic, handle all types of cells and situations - Hard to obtain circuits with specific properties - Database: a set of pattern pairs; local search: detect a pattern, implement its best realization • Heuristic algorithms - Typically restricted to single-output combinational cells - Library described by cell functionality and parameters • Most systems use a combination of both approaches: - Rules are used for I/Os, high buffering requirements, ...
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 3
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 5 Library binding: issues • Matching: - A cell matches a sub-network when their terminal behavior is the same - Tautology problem - Input-variable assignment problem • Covering: - A cover of an unbound network is a partition into sub-networks which can be replaced by library cells - Binate covering problem
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 4
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 6 Assumptions • Network granularity is fine - Decomposition into base functions: 2-input AND, OR, NAND, NOR • Trivial binding - Use base cells to realize the decomposed network - There always exists a trivial binding: the base-cost solution
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 5
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 7 Example
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 6
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 8 Example x = b + c; y = ax; z = xd. Library: AND2 (cost 4), OR2 (cost 4), OA21 (cost 5). Network vertices v1 (x), v2 (y), v3 (z), with inputs a, b, c, d. Candidate matches: m1: {v1, OR2}; m2: {v2, AND2}; m3: {v3, AND2}; m4: {v1, v2, OA21}; m5: {v1, v3, OA21}
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 7
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 9 Example • Vertex covering: - Covering v1: (m1 + m4 + m5) - Covering v2: (m2 + m4) - Covering v3: (m3 + m5) • Input compatibility: - Match m2 requires m1: (m2' + m1) - Match m3 requires m1: (m3' + m1) • Overall binate covering clause: (m1 + m4 + m5)(m2 + m4)(m3 + m5)(m2' + m1)(m3' + m1) = 1
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 8
|
DT15 (lib)
| 0
|
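The binate covering clause above is small enough to solve by exhaustive enumeration. A minimal Python sketch; the per-match cell costs (m1 = OR2 at 4, m2 = m3 = AND2 at 4, m4 = m5 = OA21 at 5) follow the example library as reconstructed here and should be treated as an assumption:

```python
from itertools import product

# Assumed costs from the example library: OR2 = 4, AND2 = 4, OA21 = 5.
cost = {'m1': 4, 'm2': 4, 'm3': 4, 'm4': 5, 'm5': 5}

def satisfies(a):
    """The slide's binate covering clause, written as a Python predicate."""
    m1, m2, m3, m4, m5 = (a[k] for k in ('m1', 'm2', 'm3', 'm4', 'm5'))
    return ((m1 or m4 or m5) and       # cover v1
            (m2 or m4) and             # cover v2
            (m3 or m5) and             # cover v3
            ((not m2) or m1) and       # m2 requires m1
            ((not m3) or m1))          # m3 requires m1

# Enumerate all 2^5 assignments and keep the cheapest satisfying one.
best = min((a for a in (dict(zip(cost, bits))
                        for bits in product([False, True], repeat=5))
            if satisfies(a)),
           key=lambda a: sum(cost[k] for k, v in a.items() if v))
print(sorted(k for k, v in best.items() if v))  # ['m4', 'm5'], total cost 10
```

Picking the two OA21 matches (cost 10) beats the three single-cell matches m1, m2, m3 (cost 12), which is exactly why the covering formulation pays off.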
(c) Giovanni De Micheli 10 Heuristic approach to library binding • Split the problem into various stages: - Decomposition: cast the network and library in a standard form; decompose into base functions (for example, NAND2 and INV) - Partitioning: break the network into cones, reducing it to many multi-input, single-output networks - Covering: cover each sub-network by library cells • Most tools use this strategy - Sometimes stages are merged
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 9
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 11 Decomposition
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 10
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 12 Partitioning
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 11
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 13 Covering
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 12
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 14 Heuristic algorithms • Structural approach - Model functions by patterns (for example, trees and DAGs) - Rely on pattern-matching techniques • Boolean approach - Use Boolean models - Solve the tautology problem (using BDD technology) - More powerful
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 13
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 15 Example • Boolean vs. structural matching • f = xy + x'y' + y'z • g = xy + x'y' + xz • Function equality is a tautology - Boolean match • Patterns may be different - A structural match may not exist
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 14
|
DT15 (lib)
| 0
|
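The Boolean match between f and g in the slide above can be verified by a tautology check over all input assignments (the brute-force stand-in for the BDD equivalence check the slides mention). A minimal Python sketch:

```python
from itertools import product

# The slide's two functions: same Boolean function, different patterns.
f = lambda x, y, z: (x and y) or ((not x) and (not y)) or ((not y) and z)
g = lambda x, y, z: (x and y) or ((not x) and (not y)) or (x and z)

# Tautology check: f == g on every one of the 2^3 input assignments.
boolean_match = all(f(*v) == g(*v) for v in product([False, True], repeat=3))
print(boolean_match)  # True: a Boolean match exists even where a structural one may not
```

Structural matchers compare expression trees, so the differing third terms (y'z vs. xz) can defeat them even though the functions are identical.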
(c) Giovanni De Micheli 17 Example
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 15
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 19 Example SUBJECT TREE and PATTERN TREES: INV (cost 2), NAND (cost 3), AND (cost 4), OR (cost 5)
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 16
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 20 Example: matching on the subject tree (vertices r, t, u, s) - Match of s: t1, cost = 2 - Match of u: t2, cost = 3 - Match of t: t1, cost = 2 + 3 = 5 - Match of t: t3, cost = 4 - Match of r: t2, cost = 3 + 2 + 4 = 9 - Match of r: t4, cost = 5 + 3 = 8
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 17
|
DT15 (lib)
| 0
|
(c) Giovanni De Micheli 21 Tree covering • Dynamic programming - Visit the subject tree bottom-up • At each vertex - Attempt to match the locally rooted subtree to all library cells - Find the best match and record it - There is always a match when the base cells are in the library • The bottom-up search yields an optimum cover • Caveat: - Mapping into trees is a distortion for some cells - Overall optimality is weakened by the strategy of splitting into several stages
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 18
|
DT15 (lib)
| 0
|
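The bottom-up dynamic program above can be sketched on the worked example from slide 20. The subject-tree structure used here (r = NAND(t, s), t = INV(u), u = NAND(a, b), s = INV(c)) is an assumed reconstruction of the figure, with pattern costs INV = 2, NAND2 = 3, AND2 = 4, OR2 = 5 taken from slide 19:

```python
class Node:
    def __init__(self, op, children=()):
        self.op = op                  # 'NAND', 'INV', or 'IN' (primary input)
        self.children = list(children)

def best_cost(n, memo=None):
    """Minimum cover cost of the subtree rooted at n (inputs cost 0)."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n.op == 'IN':
        return 0
    costs = []
    if n.op == 'NAND':
        # t2: plain NAND2 match over this node
        costs.append(3 + sum(best_cost(c, memo) for c in n.children))
        # t4: OR2 pattern = NAND2 whose both inputs are inverters
        if all(c.op == 'INV' for c in n.children):
            costs.append(5 + sum(best_cost(c.children[0], memo) for c in n.children))
    elif n.op == 'INV':
        child = n.children[0]
        # t1: plain inverter match
        costs.append(2 + best_cost(child, memo))
        # t3: AND2 pattern = INV on top of a NAND2
        if child.op == 'NAND':
            costs.append(4 + sum(best_cost(g, memo) for g in child.children))
    memo[n] = min(costs)
    return memo[n]

a, b, c = Node('IN'), Node('IN'), Node('IN')
u = Node('NAND', [a, b])
t = Node('INV', [u])
s = Node('INV', [c])
r = Node('NAND', [t, s])

print(best_cost(t))  # 4: the AND2 match beats INV over NAND2 (2 + 3 = 5)
print(best_cost(r))  # 8: the OR2 match (5 + 3) beats the NAND2 match (3 + 4 + 2 = 9)
```

The recorded minima reproduce the slide's arithmetic, which is the essence of the claim that a bottom-up visit yields an optimum tree cover.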
(c) Giovanni De Micheli 22 Different covering problems • Covering for minimum area: - Each cell has a fixed area cost (label) - Area is additive: add the area of the match to the cost of the sub-trees • Covering for minimum delay: - Delay is fanout-independent: delay computed with (max, +) rules; add the delay of the match to the highest cost among the sub-trees - Delay is fanout-dependent: a look-ahead scheme is required
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 19
|
DT15 (lib)
| 0
|
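The fanout-independent case above reduces to a (max, +) recurrence: the arrival time through a match is its cell delay plus the latest arrival among the sub-trees it absorbs, whereas the area objective simply sums. A minimal Python sketch with illustrative (assumed) delay numbers:

```python
def arrival(cell_delay, subtree_arrivals):
    """(max, +) rule: match delay plus the latest sub-tree arrival time."""
    return cell_delay + max(subtree_arrivals, default=0.0)

# An illustrative match with delay 2.0 absorbing sub-trees arriving at 1.0 and 3.5:
print(arrival(2.0, [1.0, 3.5]))  # 5.5
# A leaf-level match (no absorbed sub-trees) contributes just its own delay:
print(arrival(1.0, []))  # 1.0
```

With fanout-dependent delay the cell delay itself depends on the load seen at the output, which is not known until the cover of the fanout is chosen; that circularity is what forces the look-ahead scheme.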
(c) Giovanni De Micheli 23 Simple library
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT15 (lib).pdf
| 20
|
DT15 (lib)
| 0
|