Chapter 16. Structured Probabilistic Models for Deep Learning

the model until it is just barely possible to train or use. We often use models whose marginal distributions cannot be computed, and are satisfied simply to draw approximate samples from these models. We often train models with an intractable objective function that we cannot even approximate in a reasonable amount of time, but we are still able to approximately train the model if we can tractably obtain an estimate of the gradient of such a function. The deep learning approach is often to figure out what the minimum amount of information we absolutely need is, and then to figure out how to get a reasonable approximation of that information as quickly as possible.

16.7.1 Example: The Restricted Boltzmann Machine

The restricted Boltzmann machine (RBM) (Smolensky, 1986), or harmonium, is the quintessential example of how graphical models are used for deep learning. The RBM is not itself a deep model. Instead, it has a single layer of latent variables that may be used to learn a representation for the input. In chapter 20, we will see how RBMs can be used to build many deeper models.
[Source: Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf, p. 602]
Here, we show how the RBM exemplifies many of the practices used in a wide variety of deep graphical models: its units are organized into large groups called layers, the connectivity between layers is described by a matrix, the connectivity is relatively dense, the model is designed to allow efficient Gibbs sampling, and the emphasis of the model design is on freeing the training algorithm to learn latent variables whose semantics were not specified by the designer. Later, in section 20.2, we will revisit the RBM in more detail.

The canonical RBM is an energy-based model with binary visible and hidden units. Its energy function is

E(v, h) = −b⊤v − c⊤h − v⊤Wh,    (16.10)

where b, c, and W are unconstrained, real-valued, learnable parameters. We can see that the model is divided into two groups of units, v and h, and the interaction between them is described by a matrix W. The model is depicted graphically in figure 16.14.
As figure 16.14 makes clear, an important aspect of this model is that there are no direct interactions between any two visible units or between any two hidden units (hence the "restricted"; a general Boltzmann machine may have arbitrary connections). The restrictions on the RBM structure yield the nice properties

p(h | v) = ∏_i p(h_i | v)    (16.11)
Figure 16.14: An RBM drawn as a Markov network.

and

p(v | h) = ∏_i p(v_i | h).    (16.12)

The individual conditionals are simple to compute as well. For the binary RBM we obtain

p(h_i = 1 | v) = σ(v⊤W_{:,i} + b_i),    (16.13)
p(h_i = 0 | v) = 1 − σ(v⊤W_{:,i} + b_i).    (16.14)

Together these properties allow for efficient block Gibbs sampling, which alternates between sampling all of h simultaneously and sampling all of v simultaneously. Samples generated by Gibbs sampling from an RBM model are shown in figure 16.15.

Since the energy function itself is just a linear function of the parameters, it is easy to take its derivatives. For example,

∂E(v, h) / ∂W_{i,j} = −v_i h_j.    (16.15)

These two properties, efficient Gibbs sampling and efficient derivatives, make
training convenient. In chapter 18, we will see that undirected models may be trained by computing such derivatives applied to samples from the model.

Training the model induces a representation h of the data v. We can often use E_{h∼p(h|v)}[h] as a set of features to describe v.

Overall, the RBM demonstrates the typical deep learning approach to graphical models: representation learning accomplished via layers of latent variables, combined with efficient interactions between layers parametrized by matrices.

The language of graphical models provides an elegant, flexible and clear language for describing probabilistic models. In the chapters ahead, we use this language, among other perspectives, to describe a wide variety of deep probabilistic models.
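As a concrete illustration, the block Gibbs updates described by equations 16.11-16.14 can be sketched for a toy binary RBM. This is a minimal sketch, not code from the text: the layer sizes and random parameter initialization are hypothetical; following equation 16.13, b denotes the hidden biases, and c the visible biases.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def block_gibbs_step(v, W, b, c, rng):
    """One step of block Gibbs sampling in a binary RBM.

    Samples all hidden units given v, then all visible units given h,
    using the factorial conditionals of equations 16.11-16.14.
    """
    # p(h_i = 1 | v) = sigmoid(v^T W_{:,i} + b_i), all units at once
    p_h = sigmoid(v @ W + b)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    # Symmetrically, p(v_j = 1 | h) = sigmoid(W_{j,:} h + c_j)
    p_v = sigmoid(W @ h + c)
    v_new = (rng.random(p_v.shape) < p_v).astype(float)
    return v_new, h

# Hypothetical toy dimensions: 6 visible units, 4 hidden units.
n_v, n_h = 6, 4
W = rng.normal(scale=0.1, size=(n_v, n_h))
b = np.zeros(n_h)   # hidden biases
c = np.zeros(n_v)   # visible biases

v = rng.integers(0, 2, size=n_v).astype(float)
for _ in range(100):   # alternate h- and v-updates
    v, h = block_gibbs_step(v, W, b, c, rng)
print(v, h)
```

Because both conditionals factorize over units, each half-step samples an entire layer in one vectorized operation, which is what makes block Gibbs sampling in an RBM efficient.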
Figure 16.15: Samples from a trained RBM, and its weights. Image reproduced with permission from (LISA, 2008). (Left) Samples from a model trained on MNIST, drawn using Gibbs sampling. Each column is a separate Gibbs sampling process. Each row represents the output of another 1,000 steps of Gibbs sampling. Successive samples are highly correlated with one another. (Right) The corresponding weight vectors. Compare this to the samples and weights of a linear factor model, shown in figure 13.2. The samples here are much better because the RBM prior p(h) is not constrained to be factorial. The RBM can learn which features should appear together when sampling. On the other hand, the RBM posterior p(h | v) is factorial, while the sparse coding posterior p(h | v) is not, so the sparse coding model may be better for feature extraction. Other models are able to have both a non-factorial p(h) and a non-factorial p(h | v).
Chapter 17. Monte Carlo Methods

Randomized algorithms fall into two rough categories: Las Vegas algorithms and Monte Carlo algorithms. Las Vegas algorithms always return precisely the correct answer (or report that they failed). These algorithms consume a random amount of resources, usually memory or time. In contrast, Monte Carlo algorithms return answers with a random amount of error. The amount of error can typically be reduced by expending more resources (usually running time and memory). For any fixed computational budget, a Monte Carlo algorithm can provide an approximate answer.

Many problems in machine learning are so difficult that we can never expect to obtain precise answers to them. This excludes precise deterministic algorithms and Las Vegas algorithms. Instead, we must use deterministic approximate algorithms or Monte Carlo approximations. Both approaches are ubiquitous in machine learning. In this chapter, we focus on Monte Carlo methods.

17.1 Sampling and Monte Carlo Methods

Many important technologies used to accomplish machine learning goals are based on drawing samples from some probability distribution and using these samples to form a Monte Carlo estimate of some desired quantity.

17.1.1 Why Sampling?

There are many reasons that we may wish to draw samples from a probability distribution. Sampling provides a flexible way to approximate many sums and
integrals at reduced cost. Sometimes we use this to provide a significant speedup to a costly but tractable sum, as in the case when we subsample the full training cost with minibatches. In other cases, our learning algorithm requires us to approximate an intractable sum or integral, such as the gradient of the log partition function of an undirected model. In many other cases, sampling is actually our goal, in the sense that we want to train a model that can sample from the training distribution.

17.1.2 Basics of Monte Carlo Sampling

When a sum or an integral cannot be computed exactly (for example, the sum has an exponential number of terms and no exact simplification is known), it is often possible to approximate it using Monte Carlo sampling. The idea is to view the sum or integral as if it were an expectation under some distribution and to approximate the expectation by a corresponding average. Let

s = Σ_x p(x) f(x) = E_p[f(x)]    (17.1)

or

s = ∫ p(x) f(x) dx = E_p[f(x)]    (17.2)
be the sum or integral to estimate, rewritten as an expectation, with the constraint that p is a probability distribution (for the sum) or a probability density (for the integral) over the random variable x.

We can approximate s by drawing n samples x^(1), ..., x^(n) from p and then forming the empirical average

ŝ_n = (1/n) Σ_{i=1}^n f(x^(i)).    (17.3)

This approximation is justified by a few different properties. The first trivial observation is that the estimator ŝ_n is unbiased, since

E[ŝ_n] = (1/n) Σ_{i=1}^n E[f(x^(i))] = (1/n) Σ_{i=1}^n s = s.    (17.4)

But in addition, the law of large numbers states that if the samples x^(i) are i.i.d., then the average converges almost surely to the expected value:

lim_{n→∞} ŝ_n = s,    (17.5)
provided that the variance of the individual terms, Var[f(x^(i))], is bounded. To see this more clearly, consider the variance of ŝ_n as n increases. The variance Var[ŝ_n] decreases and converges to 0, so long as Var[f(x^(i))] < ∞:

Var[ŝ_n] = (1/n²) Σ_{i=1}^n Var[f(x)]    (17.6)
         = Var[f(x)] / n.    (17.7)

This convenient result also tells us how to estimate the uncertainty in a Monte Carlo average, or equivalently the amount of expected error of the Monte Carlo approximation. We compute both the empirical average of the f(x^(i)) and their empirical variance,¹ and then divide the estimated variance by the number of samples n to obtain an estimator of Var[ŝ_n]. The central limit theorem tells us that the distribution of the average ŝ_n converges to a normal distribution with mean s and variance Var[f(x)]/n. This allows us to estimate confidence intervals around the estimate ŝ_n, using the cumulative distribution of the normal density.
However, all this relies on our ability to easily sample from the base distribution p(x), but doing so is not always possible. When it is not feasible to sample from p, an alternative is to use importance sampling, presented in section 17.2. A more general approach is to form a sequence of estimators that converge towards the distribution of interest. That is the approach of Markov chain Monte Carlo methods (section 17.3).

17.2 Importance Sampling

An important step in the decomposition of the integrand (or summand) used by the Monte Carlo method in equation 17.2 is deciding which part of the integrand should play the role of the probability p(x) and which part should play the role of the quantity f(x) whose expected value (under that probability distribution) is to be estimated. There is no unique decomposition, because p(x)f(x) can always be rewritten as

p(x) f(x) = q(x) (p(x) f(x) / q(x)),    (17.8)
where we now sample from q and average p f / q. In many cases, we wish to compute an expectation for a given p and f, and the fact that the problem is specified

¹The unbiased estimator of the variance, in which the sum of squared differences is divided by n − 1 instead of n, is often preferred.
from the start as an expectation suggests that this p and f would be a natural choice of decomposition. However, the original specification of the problem may not be the optimal choice in terms of the number of samples required to obtain a given level of accuracy. Fortunately, the form of the optimal choice q* can be derived easily. The optimal q* corresponds to what is called optimal importance sampling.

Because of the identity shown in equation 17.8, any Monte Carlo estimator

ŝ_p = (1/n) Σ_{i=1, x^(i)∼p}^n f(x^(i))    (17.9)

can be transformed into an importance sampling estimator

ŝ_q = (1/n) Σ_{i=1, x^(i)∼q}^n p(x^(i)) f(x^(i)) / q(x^(i)).    (17.10)

We see readily that the expected value of the estimator does not depend on q:

E_q[ŝ_q] = E_p[ŝ_p] = s.    (17.11)

However, the variance of an importance sampling estimator can be greatly sensitive to the choice of q. The variance is given by
Var[ŝ_q] = Var[p(x) f(x) / q(x)] / n.    (17.12)

The minimum variance occurs when q is

q*(x) = p(x) |f(x)| / Z,    (17.13)

where Z is the normalization constant, chosen so that q*(x) sums or integrates to 1 as appropriate. Better importance sampling distributions put more weight where the integrand is larger. In fact, when f(x) does not change sign, Var[ŝ_q*] = 0, meaning that a single sample is sufficient when the optimal distribution is used. Of course, this is only because the computation of q* has essentially solved the original problem, so it is usually not practical to use this approach of drawing a single sample from the optimal distribution.
Any choice of sampling distribution q is valid (in the sense of yielding the correct expected value), and q* is the optimal one (in the sense of yielding minimum variance). Sampling from q* is usually infeasible, but other choices of q can be feasible while still reducing the variance somewhat.
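To make the variance contrast concrete, the following sketch compares the plain Monte Carlo estimator of equation 17.3 with the importance sampling estimator of equation 17.10 on a classic toy problem: estimating the small tail probability P(x > 3) under a standard normal p. The proposal q = N(4, 1) and the sample size are illustrative assumptions, chosen so that q puts mass where p(x)|f(x)| is large.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Target: s = E_p[f(x)] with p = N(0, 1) and f(x) = 1[x > 3],
# i.e. the tail probability P(x > 3), roughly 1.35e-3.
def f(x):
    return (x > 3.0).astype(float)

n = 10_000

# Plain Monte Carlo (equation 17.3): sample from p, average f.
# Most samples never land in the tail, so the estimate is noisy.
x_p = rng.standard_normal(n)
s_mc = f(x_p).mean()

# Importance sampling (equation 17.10): sample from q = N(4, 1),
# which concentrates samples in the tail, and reweight by p(x)/q(x).
x_q = rng.normal(4.0, 1.0, size=n)
w = normal_pdf(x_q) / normal_pdf(x_q, mu=4.0)
s_is = (w * f(x_q)).mean()

print(s_mc, s_is)  # both near 1.35e-3; the IS estimate is far less noisy
```

Repeating the experiment with different seeds shows the effect described in the text: the plain estimator's error is dominated by how many of the rare tail events happen to be sampled, while the reweighted estimator has much smaller variance for the same n.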
Another approach is to use biased importance sampling, which has the advantage of not requiring normalized p or q. In the case of discrete variables, the biased importance sampling estimator is given by

ŝ_BIS = [Σ_{i=1}^n (p(x^(i)) / q(x^(i))) f(x^(i))] / [Σ_{i=1}^n p(x^(i)) / q(x^(i))]    (17.14)
      = [Σ_{i=1}^n (p(x^(i)) / q̃(x^(i))) f(x^(i))] / [Σ_{i=1}^n p(x^(i)) / q̃(x^(i))]    (17.15)
      = [Σ_{i=1}^n (p̃(x^(i)) / q̃(x^(i))) f(x^(i))] / [Σ_{i=1}^n p̃(x^(i)) / q̃(x^(i))],    (17.16)

where p̃ and q̃ are the unnormalized forms of p and q, and the x^(i) are the samples from q.
This estimator is biased because E[ŝ_BIS] ≠ s, except asymptotically when n → ∞ and the denominator of equation 17.14 converges to 1. Hence this estimator is called asymptotically unbiased.

Although a good choice of q can greatly improve the efficiency of Monte Carlo estimation, a poor choice of q can make the efficiency much worse. Going back to equation 17.12, we see that if there are samples of q for which p(x)|f(x)| / q(x) is large, then the variance of the estimator can get very large. This may happen when q(x) is tiny while neither p(x) nor f(x) is small enough to cancel it. The q distribution is usually chosen to be a very simple distribution so that it is easy to sample from. When x is high-dimensional, this simplicity in q causes it to match p or p|f| poorly.
When q(x^(i)) ≫ p(x^(i))|f(x^(i))|, importance sampling collects useless samples (summing tiny numbers or zeros). On the other hand, when q(x^(i)) ≪ p(x^(i))|f(x^(i))|, which will happen more rarely, the ratio can be huge. Because these latter events are rare, they may not show up in a typical sample, yielding typical underestimation of s, compensated rarely by gross overestimation. Such very large or very small numbers are typical when x is high-dimensional, because in high dimensions the dynamic range of joint probabilities can be very large.

In spite of this danger, importance sampling and its variants have been found very useful in many machine learning algorithms, including deep learning algorithms. For example, see the use of importance sampling to accelerate training in neural language models with a large vocabulary (section 12.4.3.3) or other neural nets with a large number of outputs. See also how importance sampling has been used to estimate a partition function (the normalization constant of a probability
distribution) in section 18.7, and to estimate the log-likelihood in deep directed models such as the variational autoencoder, in section 20.10.3. Importance sampling may also be used to improve the estimate of the gradient of the cost function used to train model parameters with stochastic gradient descent, particularly for models such as classifiers, where most of the total value of the cost function comes from a small number of misclassified examples. Sampling more difficult examples more frequently can reduce the variance of the gradient in such cases (Hinton, 2006).

17.3 Markov Chain Monte Carlo Methods

In many cases, we wish to use a Monte Carlo technique but there is no tractable method for drawing exact samples from the distribution p_model(x) or from a good (low variance) importance sampling distribution q(x). In the context of deep learning, this most often happens when p_model(x) is represented by an undirected model. In these cases, we introduce a mathematical tool called a Markov chain to approximately sample from p_model(x).
The family of algorithms that use Markov chains to perform Monte Carlo estimates is called Markov chain Monte Carlo (MCMC) methods. Markov chain Monte Carlo methods for machine learning are described at greater length in Koller and Friedman (2009). The most standard, generic guarantees for MCMC techniques are only applicable when the model does not assign zero probability to any state. Therefore, it is most convenient to present these techniques as sampling from an energy-based model (EBM) p(x) ∝ exp(−E(x)), as described in section 16.2.4. In the EBM formulation, every state is guaranteed to have non-zero probability. MCMC methods are in fact more broadly applicable and can be used with many probability distributions that contain zero probability states. However, the theoretical guarantees concerning the behavior of MCMC methods must be proven on a case-by-case basis for specific families of such distributions. In the context of deep learning, it is most common to rely on the most general theoretical guarantees that naturally apply to all energy-based models.
To understand why drawing samples from an energy-based model is difficult, consider an EBM over just two variables, defining a distribution p(a, b). In order to sample a, we must draw a from p(a | b), and in order to sample b, we must draw it from p(b | a). It seems to be an intractable chicken-and-egg problem. Directed models avoid this because their graph is directed and acyclic. To perform ancestral sampling, one simply samples each of the variables in topological order, conditioning on each variable's parents, which are guaranteed to have already been sampled (section 16.3). Ancestral sampling defines an efficient, single-pass method
of obtaining a sample.

In an EBM, we can avoid this chicken-and-egg problem by sampling using a Markov chain. The core idea of a Markov chain is to have a state x that begins as an arbitrary value. Over time, we randomly update x repeatedly. Eventually x becomes (very nearly) a fair sample from p(x). Formally, a Markov chain is defined by a random state x and a transition distribution T(x' | x) specifying the probability that a random update will go to state x' if it starts in state x. Running the Markov chain means repeatedly updating the state x to a value x' sampled from T(x' | x).

To gain some theoretical understanding of how MCMC methods work, it is useful to reparametrize the problem. First, we restrict our attention to the case where the random variable x has countably many states. We can then represent the state as just a positive integer x. Different integer values of x map back to different states x in the original problem.

Consider what happens when we run infinitely many Markov chains in parallel. All of the states of the different Markov chains are drawn from some distribution
q^(t)(x), where t indicates the number of time steps that have elapsed. At the beginning, q^(0) is some distribution that we used to arbitrarily initialize x for each Markov chain. Later, q^(t) is influenced by all of the Markov chain steps that have run so far. Our goal is for q^(t)(x) to converge to p(x).

Because we have reparametrized the problem in terms of a positive integer x, we can describe the probability distribution q using a vector v, with

q(x = i) = v_i.    (17.17)

Consider what happens when we update a single Markov chain's state x to a new state x'. The probability of a single state landing in state x' is given by

q^(t+1)(x') = Σ_x q^(t)(x) T(x' | x).    (17.18)
Using our integer parametrization, we can represent the effect of the transition operator T using a matrix A. We define A so that

A_{i,j} = T(x' = i | x = j).    (17.19)

Using this definition, we can now rewrite equation 17.18. Rather than writing it in terms of q and T to understand how a single state is updated, we may now use v and A to describe how the entire distribution over all the different Markov chains (running in parallel) shifts as we apply an update:

v^(t) = A v^(t−1).    (17.20)
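A small numerical sketch of this parametrization (the three-state chain and its transition probabilities here are hypothetical): each column j of A holds the distribution T(x' | x = j), and one application of A advances the distribution over states as in equation 17.20.

```python
import numpy as np

# A tiny 3-state chain. Column j of A is T(x' | x = j), so every
# column is a probability distribution: A is a stochastic matrix.
A = np.array([
    [0.9,  0.2, 0.1],
    [0.05, 0.7, 0.3],
    [0.05, 0.1, 0.6],
])
assert np.allclose(A.sum(axis=0), 1.0)

# Equations 17.18 / 17.20: one update of the distribution over states.
v0 = np.array([1.0, 0.0, 0.0])  # all chains initialized in state 0
v1 = A @ v0
print(v1)  # [0.9, 0.05, 0.05]
```

After one step the probability mass has started to spread out from the initial state exactly as the first column of A dictates; repeated multiplication by A, discussed next, spreads it toward the chain's long-run distribution.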
Applying the Markov chain update repeatedly corresponds to multiplying by the matrix A repeatedly. In other words, we can think of the process as exponentiating the matrix A:

v^(t) = A^t v^(0).    (17.21)

The matrix A has special structure because each of its columns represents a probability distribution. Such matrices are called stochastic matrices. If there is a non-zero probability of transitioning from any state x to any other state x' for some power t, then the Perron-Frobenius theorem (Perron, 1907; Frobenius, 1908) guarantees that the largest eigenvalue is real and equal to 1. Over time, we can see that all of the eigenvalues are exponentiated:

v^(t) = (V diag(λ) V^(−1))^t v^(0) = V diag(λ)^t V^(−1) v^(0).    (17.22)

This process causes all of the eigenvalues that are not equal to 1 to decay to zero. Under some additional mild conditions, A is guaranteed to have only one eigenvector with eigenvalue 1. The process thus converges to a stationary distribution,
sometimes also called the equilibrium distribution. At convergence,

v' = A v = v,    (17.23)

and this same condition holds for every additional step. This is an eigenvector equation. To be a stationary point, v must be an eigenvector with corresponding eigenvalue 1. This condition guarantees that once we have reached the stationary distribution, repeated applications of the transition sampling procedure do not change the distribution over the states of all the various Markov chains (although the transition operator does change each individual state, of course).

If we have chosen T correctly, then the stationary distribution q will be equal to the distribution p we wish to sample from. We will describe how to choose T shortly, in section 17.4.

Most properties of Markov chains with countable states can be generalized to continuous variables. In this situation, some authors call the Markov chain a Harris chain, but we use the term Markov chain to describe both conditions.
In general, a Markov chain with transition operator T will converge, under mild conditions, to a fixed point described by the equation

q'(x') = E_{x∼q} T(x' | x),    (17.24)

which in the discrete case is just a rewriting of equation 17.23. When x is discrete, the expectation corresponds to a sum, and when x is continuous, the expectation corresponds to an integral.
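Continuing the same kind of toy example (a hypothetical three-state chain), iterating v^(t) = A v^(t−1) illustrates the convergence described above: the resulting v is a fixed point of the update, i.e. an eigenvector of A with eigenvalue 1.

```python
import numpy as np

# Hypothetical column-stochastic transition matrix for a 3-state chain.
A = np.array([
    [0.9,  0.2, 0.1],
    [0.05, 0.7, 0.3],
    [0.05, 0.1, 0.6],
])

# Run the update v <- A v many times (equation 17.21). The components of
# v^(0) along eigenvectors with eigenvalue != 1 decay away (equation 17.22).
v = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    v = A @ v

# At convergence, v is a fixed point of the update (equation 17.23)...
assert np.allclose(A @ v, v)

# ...i.e. the eigenvector of A with eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
stationary = eigvecs[:, k].real
stationary /= stationary.sum()
assert np.allclose(v, stationary)
print(v)
```

Because every entry of this A is positive, the Perron-Frobenius conditions hold, the eigenvalue 1 is unique, and the same stationary v is reached from any initial distribution.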
Chapter 17. Monte Carlo Methods

Regardless of whether the state is continuous or discrete, all Markov chain methods consist of repeatedly applying stochastic updates until eventually the state begins to yield samples from the equilibrium distribution. Running the Markov chain until it reaches its equilibrium distribution is called "burning in" the Markov chain. After the chain has reached equilibrium, a sequence of infinitely many samples may be drawn from the equilibrium distribution. They are identically distributed, but any two successive samples will be highly correlated with each other. A finite sequence of samples may thus not be very representative of the equilibrium distribution. One way to mitigate this problem is to return only every n-th successive sample, so that our estimate of the statistics of the equilibrium distribution is not as biased by the correlation between an MCMC sample and the next several samples. Markov chains are thus expensive to use because of the time required to burn in to the equilibrium distribution and the time required to transition from one sample to another reasonably decorrelated sample after reaching equilibrium. If one desires truly independent samples, one can run multiple Markov chains in parallel. This approach uses extra parallel computation to eliminate latency. The strategy of using only a single Markov chain to generate all samples and the strategy
of using one Markov chain for each desired sample are two extremes; deep learning practitioners usually use a number of chains that is similar to the number of examples in a minibatch, and then draw as many samples as are needed from this fixed set of Markov chains. A commonly used number of Markov chains is 100. Another difficulty is that we do not know in advance how many steps the Markov chain must run before reaching its equilibrium distribution. This length of time is called the mixing time. It is also very difficult to test whether a Markov chain has reached equilibrium. We do not have a precise enough theory to guide us in answering this question. Theory tells us that the chain will converge, but not much more. If we analyze the Markov chain from the point of view of a matrix A acting on a vector of probabilities v, then we know that the chain mixes when A^t has effectively lost all of the eigenvalues of A besides the unique eigenvalue of 1. This means
that the magnitude of the second-largest eigenvalue will determine the mixing time. However, in practice, we cannot actually represent our Markov chain in terms of a matrix. The number of states that our probabilistic model can visit is exponentially large in the number of variables, so it is infeasible to represent v, A, or the eigenvalues of A. Because of these and other obstacles, we usually do not know whether a Markov chain has mixed. Instead, we simply run the Markov chain for an amount of time that we roughly estimate to be sufficient, and use heuristic methods to determine whether the chain has mixed. These heuristic methods include manually inspecting samples or measuring correlations between
successive samples.

17.4 Gibbs Sampling

So far we have described how to draw samples from a distribution q(x) by repeatedly updating x ← x' ∼ T(x' | x). However, we have not described how to ensure that q(x) is a useful distribution. Two basic approaches are considered in this book. The first is to derive T from a given learned p_model, described below with the case of sampling from EBMs. The second is to directly parametrize T and learn it, so that its stationary distribution implicitly defines the p_model of interest. Examples of this second approach are discussed in sections 20.12 and 20.13. In the context of deep learning, we commonly use Markov chains to draw samples from an energy-based model defining a distribution p_model(x). In this case, we want q(x) for the Markov chain to be p_model(x). To obtain the desired q(x), we must choose an appropriate T(x' | x). A conceptually simple and effective approach to building a Markov chain that samples from p_model(x) is to use Gibbs sampling,
in which sampling from T(x' | x) is accomplished by selecting one variable x_i and sampling it from p_model conditioned on its neighbors in the undirected graph G defining the structure of the energy-based model. It is also possible to sample several variables at the same time so long as they are conditionally independent given all of their neighbors. As shown in the RBM example in section 16.7.1, all of the hidden units of an RBM may be sampled simultaneously because they are conditionally independent from each other given all of the visible units. Likewise, all of the visible units may be sampled simultaneously because they are conditionally independent from each other given all of the hidden units. Gibbs sampling approaches that update many variables simultaneously in this way are called block Gibbs sampling. Alternative approaches to designing Markov chains to sample from p_model are possible. For example, the Metropolis-Hastings algorithm is widely used in other disciplines. In the context of the deep learning approach to undirected modeling,
it is rare to use any approach other than Gibbs sampling. Improved sampling techniques are one possible research frontier.

17.5 The Challenge of Mixing between Separated Modes

The primary difficulty involved with MCMC methods is that they have a tendency to mix poorly. Ideally, successive samples from a Markov chain designed to sample
from p(x) would be completely independent from each other and would visit many different regions in x space in proportion to their probability. Instead, especially in high-dimensional cases, MCMC samples become very correlated. We refer to such behavior as slow mixing or even failure to mix. MCMC methods with slow mixing can be seen as inadvertently performing something resembling noisy gradient descent on the energy function, or equivalently noisy hill climbing on the probability, with respect to the state of the chain (the random variables being sampled). The chain tends to take small steps (in the space of the state of the Markov chain) from a configuration x^(t−1) to a configuration x^(t), with the energy E(x^(t)) generally lower than or approximately equal to the energy E(x^(t−1)), with a preference for moves that yield lower-energy configurations. When starting from a rather improbable configuration (higher energy than the typical ones from p(x)), the chain tends to gradually reduce the energy of the state and only occasionally move to another mode. Once the chain has found a region of low energy (for example, if
the variables are pixels in an image, a region of low energy might be a connected manifold of images of the same object), which we call a mode, the chain will tend to walk around that mode (following a kind of random walk). Once in a while it will step out of that mode and generally return to it, or (if it finds an escape route) move towards another mode. The problem is that successful escape routes are rare for many interesting distributions, so the Markov chain will continue to sample the same mode longer than it should. This is very clear when we consider the Gibbs sampling algorithm (section 17.4). In this context, consider the probability of going from one mode to a nearby mode within a given number of steps. What determines that probability is the shape of the "energy barrier" between these modes. Transitions between two modes that are separated by a high energy barrier (a region of low probability) are exponentially less likely (in terms of the height of the energy barrier)
. This is illustrated in figure 17.1. The problem arises when there are multiple modes with high probability that are separated by regions of low probability, especially when each Gibbs sampling step must update only a small subset of variables whose values are largely determined by the other variables. As a simple example, consider an energy-based model over two variables a and b, which are both binary with a sign, taking on values −1 and 1. If E(a, b) = −w a b for some large positive number w, then the model expresses a strong belief that a and b have the same sign. Consider updating b using a Gibbs sampling step with a = 1. The conditional distribution over b is given by P(b = 1 | a = 1) = σ(w). If w is large, the sigmoid saturates, and the probability of also assigning b to
be 1 is close to 1. Likewise, if a = −1, the probability of assigning b to be −1 is close to 1. According to p_model(a, b), both signs of both variables are equally likely.
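This mode-locking is easy to observe in simulation. Below is a small sketch of Gibbs sampling on E(a, b) = −w·a·b; the values of w and the number of sweeps are arbitrary choices, and the conditional is written σ(2wa) because of the ±1 encoding of the variables:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_flips(w, sweeps=2000):
    """Run Gibbs sampling on E(a, b) = -w*a*b with a, b in {-1, +1}
    and count how often a variable changes sign."""
    a, b = 1, 1
    flips = 0
    for _ in range(sweeps):
        # p(b = 1 | a) = sigmoid(2*w*a) for this +/-1 parametrization
        new_b = 1 if rng.random() < 1 / (1 + np.exp(-2 * w * a)) else -1
        flips += (new_b != b)
        b = new_b
        new_a = 1 if rng.random() < 1 / (1 + np.exp(-2 * w * b)) else -1
        flips += (new_a != a)
        a = new_a
    return flips

weak, strong = gibbs_flips(w=0.1), gibbs_flips(w=5.0)
assert strong < weak   # large w: the chain almost never switches mode
```

With the strong coupling, the chain almost never flips a sign over thousands of sweeps, which is exactly the failure to mix described above.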
Figure 17.1: Paths followed by Gibbs sampling for three distributions, with the Markov chain initialized at the mode in both cases. (Left) A multivariate normal distribution with two independent variables. Gibbs sampling mixes well because the variables are independent. (Center) A multivariate normal distribution with highly correlated variables. The correlation between variables makes it difficult for the Markov chain to mix. Because the update for each variable must be conditioned on the other variable, the correlation reduces the rate at which the Markov chain can move away from the starting point. (Right) A mixture of Gaussians with widely separated modes that are not axis-aligned. Gibbs sampling mixes very slowly because it is difficult to change modes while altering only one variable at a time.

According to p_model(b | a), both variables should have the same sign. This means that Gibbs sampling will only very rarely flip the signs of these variables. In more practical scenarios, the challenge is even greater because we care not only about making transitions between two modes but more generally between all the many modes that a real model might contain. If several such transitions are difficult because of the difficulty of mixing between modes, then it becomes very expensive to obtain a
reliable set of samples covering most of the modes, and convergence of the chain to its stationary distribution is very slow. Sometimes this problem can be resolved by finding groups of highly dependent units and updating all of them simultaneously in a block. Unfortunately, when the dependencies are complicated, it can be computationally intractable to draw a sample from the group. After all, the problem that the Markov chain was originally introduced to solve is this problem of sampling from a large group of variables. In the context of models with latent variables, which define a joint distribution p_model(x, h), we often draw samples of x by alternating between sampling from p_model(x | h) and sampling from p_model(h | x). From the point of view of mixing
Figure 17.2: An illustration of the slow mixing problem in deep probabilistic models. Each panel should be read left to right, top to bottom. (Left) Consecutive samples from Gibbs sampling applied to a deep Boltzmann machine trained on the MNIST dataset. Consecutive samples are similar to each other. Because the Gibbs sampling is performed in a deep graphical model, this similarity is based more on semantic than on raw visual features, but it is still difficult for the Gibbs chain to transition from one mode of the distribution to another, for example by changing the digit identity. (Right) Consecutive ancestral samples from a generative adversarial network. Because ancestral sampling generates each sample independently from the others, there is no mixing problem.

rapidly, we would like p_model(h | x) to have very high entropy. However, from the point of view of learning a useful representation of h, we would like h to encode enough information about x to reconstruct it well, which implies that h and x should have very high mutual information. These two goals are at odds with each other. We often learn generative models that very precisely encode x into h but are not able to mix very well. This
situation arises frequently with Boltzmann machines: the sharper the distribution a Boltzmann machine learns, the harder it is for a Markov chain sampling from the model distribution to mix well. This problem is illustrated in figure 17.2. All this could make MCMC methods less useful when the distribution of interest has a manifold structure with a separate manifold for each class: the distribution is concentrated around many modes, and these modes are separated by vast regions of high energy. This type of distribution is what we expect in many classification problems, and it would make MCMC methods converge very slowly because of poor mixing between modes.
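The eigenvalue view described earlier makes this quantitative on a toy chain: if two modes are connected only by a small escape probability eps (a crude stand-in for a high energy barrier), the second eigenvalue 1 − 2·eps approaches 1 as the distribution sharpens, and convergence to the stationary distribution slows accordingly. A sketch with hypothetical numbers:

```python
import numpy as np

# Two "modes" connected by a small escape probability eps: the smaller eps,
# the closer |lambda_2| is to 1 and the slower the chain forgets its start.
def convergence(eps, steps):
    A = np.array([[1 - eps, eps],
                  [eps, 1 - eps]])      # column-stochastic transition matrix
    lam2 = np.sort(np.abs(np.linalg.eigvals(A)))[0]  # second eigenvalue size
    v = np.array([1.0, 0.0])            # start entirely in mode 0
    for _ in range(steps):
        v = A @ v
    dist = np.abs(v - 0.5).sum()        # distance to the uniform stationary dist.
    return lam2, dist

lam2_sharp, dist_sharp = convergence(eps=0.001, steps=100)
lam2_flat, dist_flat = convergence(eps=0.2, steps=100)
assert lam2_sharp > lam2_flat   # sharper -> second eigenvalue closer to 1
assert dist_sharp > dist_flat   # ... and slower convergence in practice
```

For this two-state chain the eigenvalues are exactly 1 and 1 − 2·eps, so the distance to stationarity shrinks by a factor of 1 − 2·eps per step.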
17.5.1 Tempering to Mix between Modes

When a distribution has sharp peaks of high probability surrounded by regions of low probability, it is difficult to mix between the different modes of the distribution. Several techniques for faster mixing are based on constructing alternative versions of the target distribution in which the peaks are not as high and the surrounding valleys are not as low. Energy-based models provide a particularly simple way to do so. So far, we have described an energy-based model as defining a probability distribution

p(x) ∝ exp(−E(x)).   (17.25)

Energy-based models may be augmented with an extra parameter β controlling how sharply peaked the distribution is:

p_β(x) ∝ exp(−β E(x)).   (17.26)

The β parameter is often described as being the reciprocal of the temperature, reflecting the origin of energy-based models in statistical physics. When the temperature falls to zero and β rises to infinity, the energy-based model becomes deterministic. When the temperature rises to infinity and β falls to zero, the distribution (for discrete x) becomes uniform. Typically, a model is trained to be
evaluated at β = 1. However, we can make use of other temperatures, particularly those where β < 1. Tempering is a general strategy of mixing between modes of p_1 rapidly by drawing samples with β < 1. Markov chains based on tempered transitions (Neal, 1994) temporarily sample from higher-temperature distributions in order to mix to different modes, then resume sampling from the unit-temperature distribution. These techniques have been applied to models such as RBMs (Salakhutdinov, 2010). Another approach is to use parallel tempering (Iba, 2001), in which the Markov chain simulates many different states in parallel, at different temperatures. The highest-temperature states mix quickly, while the lowest-temperature states, at temperature 1, provide accurate samples from the model. The transition operator includes stochastically swapping states between two different temperature levels, so that a sufficiently high-probability sample from a high-temperature slot can jump into a lower-temperature slot.
This approach has also been applied to RBMs (Desjardins et al., 2010; Cho et al., 2010). Although tempering is a promising approach, at this point it has not allowed researchers to make a strong advance in solving the challenge of sampling from complex EBMs. One possible reason is that there are critical temperatures around which the temperature transition must be very slow (as the temperature is gradually reduced) in order for tempering to be effective.
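The effect of β on the barrier can be quantified for a toy energy function. In the sketch below, the double-well energy, the grid, and the β values are all arbitrary illustrations; it checks that the tempered distribution p_β with β < 1 places more probability mass on the barrier region between the two modes:

```python
import numpy as np

# A hypothetical 1-D double-well energy on a discrete grid: two low-energy
# modes at x = +/-1 separated by a high-energy barrier at x = 0.
x = np.linspace(-2, 2, 201)
E = (x**2 - 1.0)**2 * 8.0

def p_beta(beta):
    """Normalized tempered distribution p_beta(x) proportional to exp(-beta*E)."""
    w = np.exp(-beta * E)
    return w / w.sum()

barrier = np.abs(x) < 0.25          # states near the barrier region

mass_cold = p_beta(1.0)[barrier].sum()
mass_warm = p_beta(0.2)[barrier].sum()
assert mass_warm > mass_cold        # smaller beta puts more mass on the barrier
```

The flatter tempered distribution leaves more probability on the barrier, which is why a chain run at β < 1 can cross between modes that trap a chain run at β = 1.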
17.5.2 Depth May Help Mixing

When drawing samples from a latent variable model p(h, x), we have seen that if p(h | x) encodes x too well, then sampling from p(x | h) will not change x very much, and mixing will be poor. One way to resolve this problem is to make h a deep representation, encoding x into h in such a way that a Markov chain in the space of h can mix more easily. Many representation learning algorithms, such as autoencoders and RBMs, tend to yield a marginal distribution over h that is more uniform and more unimodal than the original data distribution over x. It can be argued that this arises from trying to minimize reconstruction error while using all of the available representation space, because minimizing reconstruction error over the training examples is better achieved when different training examples are easily distinguishable from each other in h-space, and thus well separated. Bengio et al. (2013a) observed that deeper stacks of regularized autoencoders or RBMs yield marginal distributions in the top-level h-space that appear more spread out and more uniform, with less of a gap between the
regions corresponding to different modes (categories, in the experiments). Training an RBM in that higher-level space allowed Gibbs sampling to mix faster between modes. It remains unclear, however, how to exploit this observation to help better train and sample from deep generative models. Despite the difficulty of mixing, Monte Carlo techniques are useful and are often the best tool available. Indeed, they are the primary tool used to confront the intractable partition function of undirected models, discussed next.
Chapter 18. Confronting the Partition Function

In section 16.2.2 we saw that many probabilistic models (commonly known as undirected graphical models) are defined by an unnormalized probability distribution p̃(x; θ). We must normalize p̃ by dividing by a partition function Z(θ) in order to obtain a valid probability distribution:

p(x; θ) = p̃(x; θ) / Z(θ).   (18.1)

The partition function is an integral (for continuous variables) or sum (for discrete variables) over the unnormalized probability of all states:

Z = ∫ p̃(x) dx   (18.2)

or

Z = Σ_x p̃(x).   (18.3)

This operation is intractable for many interesting models. As we will see in chapter 20, several deep learning models are designed to have a tractable normalizing constant, or are designed to be used in ways that do not involve computing p(x) at all. However, other models directly confront the challenge of intractable partition functions. In this chapter, we describe techniques used for training and evaluating models that have intractable partition functions.
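For a model small enough to enumerate, the partition function can be computed by brute force, which makes equation 18.1 concrete. A sketch with a hypothetical 4-variable binary model (the pairwise energy is a made-up example); the 2^n sum below is exactly what becomes intractable at realistic sizes:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4                                     # number of binary variables
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                         # symmetric pairwise interactions
np.fill_diagonal(W, 0.0)

def p_tilde(x):
    """Unnormalized probability exp(x^T W x / 2) of a +/-1 state vector."""
    return np.exp(x @ W @ x / 2.0)

# For discrete x, the partition function is a sum over all 2^n states.
states = [np.array(s) for s in itertools.product([-1, 1], repeat=n)]
Z = sum(p_tilde(s) for s in states)
probs = np.array([p_tilde(s) / Z for s in states])
assert abs(probs.sum() - 1.0) < 1e-9      # dividing by Z gives a valid distribution
```

With n = 4 this sum has 16 terms; with n = 1000 it would have 2^1000, which is why the rest of the chapter is about avoiding the exact computation.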
18.1 The Log-Likelihood Gradient

What makes learning undirected models by maximum likelihood particularly difficult is that the partition function depends on the parameters. The gradient of the log-likelihood with respect to the parameters has a term corresponding to the gradient of the partition function:

∇θ log p(x; θ) = ∇θ log p̃(x; θ) − ∇θ log Z(θ).   (18.4)

This is a well-known decomposition into the positive phase and negative phase of learning. For most undirected models of interest, the negative phase is difficult. Models with no latent variables or with few interactions between latent variables typically have a tractable positive phase. The quintessential example of a model with a straightforward positive phase and difficult negative phase is the RBM, which has hidden units that are conditionally independent from each other given the visible units. The case where the positive phase is difficult, with complicated interactions between latent variables, is primarily covered in chapter 19. This chapter focuses on the difficulties of the negative phase. Let us look more closely at the gradient of log Z:

∇θ log Z   (18.5)
= ∇θ Z / Z   (18.6)
= ∇θ Σ_x p̃(x) / Z   (18.7)
= Σ_x ∇θ p̃(x) / Z.   (18.8)

For models that guarantee p(x) > 0 for all x, we can substitute exp(log p̃(x)) for p̃(x):

Σ_x ∇θ exp(log p̃(x)) / Z   (18.9)
= Σ_x exp(log p̃(x)) ∇θ log p̃(x) / Z   (18.10)
= Σ_x p̃(x) ∇θ log p̃(x) / Z   (18.11)
= Σ_x p(x) ∇θ log p̃(x)   (18.12)
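This identity can be verified numerically on a tiny model. In the sketch below, p̃(x; θ) = exp(θ f(x)) over five discrete states (f and θ are arbitrary toy values), so ∇θ log p̃(x) = f(x), and a finite-difference estimate of ∇θ log Z should match the model expectation of f:

```python
import numpy as np

# Tiny exponential-family model over 5 discrete states:
# p_tilde(x; theta) = exp(theta * f(x)), so grad_theta log p_tilde(x) = f(x).
f = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
theta = 0.7

def log_Z(t):
    return np.log(np.exp(t * f).sum())

p = np.exp(theta * f - log_Z(theta))      # normalized model distribution

# Left side: numerical gradient of log Z.  Right side: E_{x~p} grad log p_tilde.
eps = 1e-6
lhs = (log_Z(theta + eps) - log_Z(theta - eps)) / (2 * eps)
rhs = (p * f).sum()
assert abs(lhs - rhs) < 1e-6
```

The same check would work for any θ, which is the content of the derivation: the gradient of log Z is an expectation under the model distribution.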
= E_{x∼p(x)} ∇θ log p̃(x).   (18.13)

This derivation made use of summation over discrete x, but a similar result applies using integration over continuous x. In the continuous version of the derivation, we use Leibniz's rule for differentiation under the integral sign to obtain the identity

∇θ ∫ p̃(x) dx = ∫ ∇θ p̃(x) dx.   (18.14)

This identity is applicable only under certain regularity conditions on p̃ and ∇θ p̃(x). In measure-theoretic terms, the conditions are: (i) the unnormalized distribution p̃ must be a Lebesgue-integrable function of x for every value of θ; (ii) the gradient ∇θ p̃(x) must exist for all θ and almost all x; (iii) there must exist an integrable function R(x) that bounds ∇θ p̃(x), in the sense that max_i |∂p̃(x)/∂θ_i| ≤ R(x) for all θ and almost all x. Fortunately, most machine learning models of interest have these properties. The identity

∇θ log Z = E_{x∼p(x)} ∇θ log p̃(x)   (18.15)

is the basis for a variety of Monte Carlo methods for approximately maximizing the likelihood of models with intractable partition functions. The Monte Carlo approach to learning undirected models provides an intuitive framework in which we can think of both the positive phase and the negative phase. In the positive phase, we increase log p̃(x) for x drawn from the data. In the negative phase, we decrease the partition function by decreasing log p̃(x) for x drawn from the model distribution. In the deep learning literature, it is common to parametrize log p̃ in terms of an energy function (equation 16.7). In this case, we can interpret the positive phase as pushing down on the energy of training examples and the negative phase as pushing up on the energy of samples drawn from the model, as illustrated in figure 18.1.

18.2 Stochastic Maximum Likelihood and Contrastive Divergence

The naive way of implementing equation 18.15 is to compute it by burning in a set of Markov
chains from a random initialization every time the gradient is needed. When learning is performed using stochastic gradient descent, this means the chains must be burned in once per gradient step. This approach leads to the
chapter 18. confronting the partition function training procedure presented in algorithm. the high cost of burning in the 18. 1 markov chains in the inner loop makes this procedure computationally infeasible, but this procedure is the starting point that other more practical algorithms aim to approximate. algorithm 18. 1 a naive mcmc algorithm for maximizing the log - likelihood with an intractable partition function using gradient ascent. set, the step size, to a small positive number. set k, the number of gibbs steps, high enough to allow burn in. perhaps 100 to train an rbm on a small image patch. while not converged do sample a minibatch of examples m { x ( 1 ),..., x ( ) m } from the training set. g ← 1 m m i = 1 ∇θ log [UNK] ( x ( ) i ; ) θ. initialize a set of m samples { [UNK] ( 1 ),..., [UNK] ( ) m } to random values ( e. g., from a uniform or normal distribution, or possibly a distribution with marginals matched to the model ’ s marginals ). for do i k = 1 to for do j m = 1 to [UNK] ( ) j ←gibbs _ update (
    for i = 1 to k do
        for j = 1 to m do
            x̃^(j) ← gibbs_update(x̃^(j)).
        end for
    end for
    g ← g − (1/m) Σ_{i=1}^m ∇_θ log p̃(x̃^(i); θ).
    θ ← θ + εg.
end while

We can view the MCMC approach to maximum likelihood as trying to achieve balance between two forces, one pushing up on the model distribution where the data occurs, and another pushing down on the model distribution where the model samples occur. Figure 18.1 illustrates this process. The two forces correspond to maximizing log p̃ and minimizing log Z. Several approximations to the negative phase are possible. Each of these approximations can be understood as making the negative phase computationally cheaper but also making it push down in the wrong locations. Because the negative phase involves drawing samples from the model's distribution, we can think of it as finding points that the model believes in strongly.
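To make the procedure concrete, the following is a minimal sketch of algorithm 18.1, not code from the book: it assumes a small fully visible binary model with energy E(x) = −½xᵀWx − bᵀx (W symmetric, zero diagonal), so that log p̃(x) = ½xᵀWx + bᵀx and each Gibbs conditional is a logistic function. All names and hyperparameters here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_update(x, W, b):
    # One full Gibbs sweep over a fully visible binary model with
    # energy E(x) = -0.5 x^T W x - b^T x (W symmetric, zero diagonal).
    for i in range(len(x)):
        # With a zero diagonal, W[i] @ x already excludes x_i itself.
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ x + b[i])))
        x[i] = float(rng.random() < p_on)
    return x

def naive_mcmc_step(W, b, batch, k=50, eps=0.01):
    # One gradient-ascent step of algorithm 18.1: positive phase on the
    # data, negative phase from freshly burned-in Markov chains.
    m, d = batch.shape
    gW = 0.5 * batch.T @ batch / m        # positive phase: grad of log p~
    gb = batch.mean(axis=0)
    chains = (rng.random((m, d)) < 0.5).astype(float)  # random restarts
    for _ in range(k):                    # expensive burn-in, every step
        for j in range(m):
            chains[j] = gibbs_update(chains[j], W, b)
    gW -= 0.5 * chains.T @ chains / m     # negative phase
    gb -= chains.mean(axis=0)
    np.fill_diagonal(gW, 0.0)             # no self-interaction terms
    return W + eps * gW, b + eps * gb
```

The burn-in loop re-runs k full sweeps from scratch on every gradient step, which is exactly the cost that the more practical algorithms in this section try to avoid.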
model ’ s distri - bution, we can think of it as finding points that the model believes in strongly. because the negative phase acts to reduce the probability of those points, they are generally considered to represent the model ’ s incorrect beliefs about the world. they are frequently referred to in the literature as “ hallucinations ” or “ fantasy particles. ” in fact, the negative phase has been proposed as a possible explanation 608
[Figure: two panels, "the positive phase" and "the negative phase," each plotting p_model(x) and p_data(x) against x.]

Figure 18.1: The view of algorithm 18.1 as having a "positive phase" and "negative phase." (Left) In the positive phase, we sample points from the data distribution, and push up on their unnormalized probability. This means points that are likely in the data get pushed up on more. (Right) In the negative phase, we sample points from the model distribution, and push down on their unnormalized probability. This counteracts the positive phase's tendency to just add a large constant to the unnormalized probability everywhere. When the data distribution and the model distribution are equal, the positive phase has the same chance to push up at a point as the negative phase has to push down. When this occurs, there is no longer any gradient (in expectation) and training must terminate.
for dreaming in humans and other animals (Crick and Mitchison, 1983), the idea being that the brain maintains a probabilistic model of the world and follows the gradient of log p̃ while experiencing real events while awake, and follows the negative gradient of log p̃ to minimize log Z while sleeping and experiencing events sampled from the current model. This view explains much of the language used to describe algorithms with a positive and negative phase, but it has not been proven to be correct with neuroscientific experiments. In machine learning models, it is usually necessary to use the positive and negative phase simultaneously, rather than in separate time periods of wakefulness and REM sleep. As we will see in section 19.5, other machine learning algorithms draw samples from the model distribution for other purposes, and such algorithms could also provide an account for the function of dream sleep.

Given this understanding of the role of the positive and negative phase of learning, we can attempt to design a less expensive alternative to algorithm 18.1. The main cost of the naive MCMC algorithm is the cost of burning in the Markov chains from a random initialization at each step.
A natural solution is to initialize the Markov chains from a distribution that is very close to the model distribution,
so that the burn in operation does not take as many steps.

The contrastive divergence (CD, or CD-k to indicate CD with k Gibbs steps) algorithm initializes the Markov chain at each step with samples from the data distribution (Hinton, 2000, 2010). This approach is presented as algorithm 18.2. Obtaining samples from the data distribution is free, because they are already available in the data set. Initially, the data distribution is not close to the model distribution, so the negative phase is not very accurate. Fortunately, the positive phase can still accurately increase the model's probability of the data. After the positive phase has had some time to act, the model distribution is closer to the data distribution, and the negative phase starts to become accurate.

Algorithm 18.2 The contrastive divergence algorithm, using gradient ascent as the optimization procedure.

Set ε, the step size, to a small positive number.
Set k, the number of Gibbs steps, high enough to allow a Markov chain sampling from p(x; θ) to mix when initialized from p_data. Perhaps 1-20 to train an RBM on a small image patch.
while not converged do
    Sample a minibatch of m examples {x^(1), ..., x^(m)} from the training set.
    g ← (1/m) Σ_{i=1}^m ∇_θ log p̃(x^(i); θ).
    for i = 1 to m do
        x̃^(i) ← x^(i).
    end for
    for i = 1 to k do
        for j = 1 to m do
            x̃^(j) ← gibbs_update(x̃^(j)).
        end for
    end for
    g ← g − (1/m) Σ_{i=1}^m ∇_θ log p̃(x̃^(i); θ).
    θ ← θ + εg.
end while

Of course, CD is still an approximation to the correct negative phase. The main way that CD qualitatively fails to implement the correct negative phase is that it fails to suppress regions of high probability that are far from actual training examples.
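The only change from the naive procedure is where the negative-phase chains start. A minimal sketch of algorithm 18.2, assuming a small fully visible binary model with energy E(x) = −½xᵀWx − bᵀx (W symmetric, zero diagonal) so the Gibbs conditionals are logistic; the model, names, and hyperparameters are illustrative assumptions, not the book's code:

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_update(x, W, b):
    # One Gibbs sweep for a binary model with E(x) = -0.5 x^T W x - b^T x.
    for i in range(len(x)):
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ x + b[i])))
        x[i] = float(rng.random() < p_on)
    return x

def cd_k_step(W, b, batch, k=1, eps=0.01):
    # One CD-k step: identical to the naive algorithm except that the
    # negative-phase chains start at the data, so a small k suffices.
    m, d = batch.shape
    gW = 0.5 * batch.T @ batch / m       # positive phase
    gb = batch.mean(axis=0)
    chains = batch.copy()                # initialize chains at the data
    for _ in range(k):
        for j in range(m):
            chains[j] = gibbs_update(chains[j], W, b)
    gW -= 0.5 * chains.T @ chains / m    # negative phase
    gb -= chains.mean(axis=0)
    np.fill_diagonal(gW, 0.0)
    return W + eps * gW, b + eps * gb
```

Because the chains run for only k sweeps from the data, they never wander far from the training points, which is why this estimator cannot push down on distant spurious modes.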
These regions that have high probability under the model but low probability under the data generating distribution are called spurious modes. Figure 18.2 illustrates why this happens. Essentially, it is because modes in the model distribution that are far from the data distribution will not be visited by
[Figure: one panel plotting p_model(x) and p_data(x) against x.]

Figure 18.2: An illustration of how the negative phase of contrastive divergence (algorithm 18.2) can fail to suppress spurious modes. A spurious mode is a mode that is present in the model distribution but absent in the data distribution. Because contrastive divergence initializes its Markov chains from data points and runs the Markov chain for only a few steps, it is unlikely to visit modes in the model that are far from the data points. This means that when sampling from the model, we will sometimes get samples that do not resemble the data. It also means that due to wasting some of its probability mass on these modes, the model will struggle to place high probability mass on the correct modes. For the purpose of visualization, this figure uses a somewhat simplified concept of distance: the spurious mode is far from the correct mode along the number line in ℝ. This corresponds to a Markov chain based on making local moves with a single x variable in ℝ.
For most deep probabilistic models, the Markov chains are based on Gibbs sampling and can make non-local moves of individual variables but cannot move all of the variables simultaneously. For these problems, it is usually better to consider the edit distance between modes, rather than the Euclidean distance. However, edit distance in a high dimensional space is difficult to depict in a 2-D plot.

Markov chains initialized at training points, unless k is very large.

Carreira-Perpiñán and Hinton (2005) showed experimentally that the CD estimator is biased for RBMs and fully visible Boltzmann machines, in that it converges to different points than the maximum likelihood estimator. They argue that because the bias is small, CD could be used as an inexpensive way to initialize a model that could later be fine-tuned via more expensive MCMC methods.
Bengio and Delalleau (2009) showed that CD can be interpreted as discarding the smallest terms of the correct MCMC update gradient, which explains the bias.

CD is useful for training shallow models like RBMs. These can in turn be stacked to initialize deeper models like DBNs or DBMs. However, CD does not provide much help for training deeper models directly. This is because it is difficult
to obtain samples of the hidden units given samples of the visible units. Since the hidden units are not included in the data, initializing from training points cannot solve the problem. Even if we initialize the visible units from the data, we will still need to burn in a Markov chain sampling from the distribution over the hidden units conditioned on those visible samples.

The CD algorithm can be thought of as penalizing the model for having a Markov chain that changes the input rapidly when the input comes from the data. This means training with CD somewhat resembles autoencoder training. Even though CD is more biased than some of the other training methods, it can be useful for pretraining shallow models that will later be stacked. This is because the earliest models in the stack are encouraged to copy more information up to their latent variables, thereby making it available to the later models. This should be thought of more of as an often-exploitable side effect of CD training rather than a principled design advantage.
Sutskever and Tieleman (2010) showed that the CD update direction is not the gradient of any function. This allows for situations where CD could cycle forever, but in practice this is not a serious problem.

A different strategy that resolves many of the problems with CD is to initialize the Markov chains at each gradient step with their states from the previous gradient step. This approach was first discovered under the name stochastic maximum likelihood (SML) in the applied mathematics and statistics community (Younes, 1998) and later independently rediscovered under the name persistent contrastive divergence (PCD, or PCD-k to indicate the use of k Gibbs steps per update) in the deep learning community (Tieleman, 2008). See algorithm 18.3. The basic idea of this approach is that, so long as the steps taken by the stochastic gradient algorithm are small, then the model from the previous step will be similar to the model from the current step.
It follows that the samples from the previous model's distribution will be very close to being fair samples from the current model's distribution, so a Markov chain initialized with these samples will not require much time to mix. Because each Markov chain is continually updated throughout the learning process, rather than restarted at each gradient step, the chains are free to wander far enough to find all of the model's modes. SML is thus considerably more resistant to forming models with spurious modes than CD is. Moreover, because it is possible to store the state of all of the sampled variables, whether visible or latent, SML provides an initialization point for both the hidden and visible units. CD is only able to provide an initialization for the visible units, and therefore requires burn-in for deep models. SML is able to train deep models efficiently.
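The persistent-chain idea can be sketched by threading the negative-phase samples through successive gradient steps instead of re-initializing them. As before, this is an illustrative sketch against a hypothetical fully visible binary model with energy E(x) = −½xᵀWx − bᵀx and logistic Gibbs conditionals, not a reference implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def gibbs_update(x, W, b):
    # One Gibbs sweep for a binary model with E(x) = -0.5 x^T W x - b^T x.
    for i in range(len(x)):
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ x + b[i])))
        x[i] = float(rng.random() < p_on)
    return x

def pcd_step(W, b, batch, chains, k=1, eps=0.01):
    # One SML/PCD step. `chains` holds the persistent negative-phase
    # samples; they are advanced by k Gibbs sweeps and returned, to be
    # passed back in at the next gradient step.
    m, d = batch.shape
    gW = 0.5 * batch.T @ batch / m           # positive phase
    gb = batch.mean(axis=0)
    for _ in range(k):                       # continue the old chains:
        for j in range(len(chains)):         # no burn-in from scratch
            chains[j] = gibbs_update(chains[j], W, b)
    gW -= 0.5 * chains.T @ chains / len(chains)   # negative phase
    gb -= chains.mean(axis=0)
    np.fill_diagonal(gW, 0.0)
    return W + eps * gW, b + eps * gb, chains
```

The chains are initialized once at random before training and then carried through every call; as long as the step size keeps successive models close, a few Gibbs sweeps per step keep the chains approximately mixed.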
Marlin et al. (2010) compared SML to many of the other criteria presented in this chapter. They found that SML results in the best test set log-likelihood for an RBM, and that if the RBM's hidden units are used as features for an SVM classifier, SML results in the best classification accuracy.

SML is vulnerable to becoming inaccurate if the stochastic gradient algorithm can move the model faster than the Markov chain can mix between steps. This can happen if k is too small or ε is too large. The permissible range of values is unfortunately highly problem-dependent. There is no known way to test formally whether the chain is successfully mixing between steps. Subjectively, if the learning rate is too high for the number of Gibbs steps, the human operator will be able to observe that there is much more variance in the negative phase samples across gradient steps rather than across different Markov chains. For example, a model trained on MNIST might sample exclusively 7s on one step.
The learning process will then push down strongly on the mode corresponding to 7s, and the model might sample exclusively 9s on the next step.

Algorithm 18.3 The stochastic maximum likelihood / persistent contrastive divergence algorithm using gradient ascent as the optimization procedure.

Set ε, the step size, to a small positive number.
Set k, the number of Gibbs steps, high enough to allow a Markov chain sampling from p(x; θ + εg) to burn in, starting from samples from p(x; θ). Perhaps 1 for RBM on a small image patch, or 5-50 for a more complicated model like a DBM.
Initialize a set of m samples {x̃^(1), ..., x̃^(m)} to random values (e.g., from a uniform or normal distribution, or possibly a distribution with marginals matched to the model's marginals).
while not converged do
    Sample a minibatch of m examples {x^(1), ..., x^(m)} from the training set.
    g ← (1/m) Σ_{i=1}^m ∇_θ log p̃(x^(i); θ).
    for i = 1 to k do
        for j = 1 to m do
            x̃^(j) ← gibbs_update(x̃^(j)).
        end for
    end for
    g ← g − (1/m) Σ_{i=1}^m ∇_θ log p̃(x̃^(i); θ).
    θ ← θ + εg.
end while

Care must be taken when evaluating the samples from a model trained with SML. It is necessary to draw the samples starting from a fresh Markov chain
initialized from a random starting point after the model is done training. The samples present in the persistent negative chains used for training have been influenced by several recent versions of the model, and thus can make the model appear to have greater capacity than it actually does.

Berglund and Raiko (2013) performed experiments to examine the bias and variance in the estimate of the gradient provided by CD and SML. CD proves to have lower variance than the estimator based on exact sampling. SML has higher variance. The cause of CD's low variance is its use of the same training points in both the positive and negative phase. If the negative phase is initialized from different training points, the variance rises above that of the estimator based on exact sampling.

All of these methods based on using MCMC to draw samples from the model can in principle be used with almost any variant of MCMC. This means that techniques such as SML can be improved by using any of the enhanced MCMC techniques described in chapter 17, such as parallel tempering (Desjardins et al., 2010; Cho et al., 2010).
One approach to accelerating mixing during learning relies not on changing the Monte Carlo sampling technology but rather on changing the parametrization of the model and the cost function. Fast PCD or FPCD (Tieleman and Hinton, 2009) involves replacing the parameters θ of a traditional model with an expression

θ = θ^(slow) + θ^(fast).    (18.16)

There are now twice as many parameters as before, and they are added together element-wise to provide the parameters used by the original model definition. The fast copy of the parameters is trained with a much larger learning rate, allowing it to adapt rapidly in response to the negative phase of learning and push the Markov chain to new territory. This forces the Markov chain to mix rapidly, though this effect only occurs during learning while the fast weights are free to change.
Typically one also applies significant weight decay to the fast weights, encouraging them to converge to small values, after only transiently taking on large values long enough to encourage the Markov chain to change modes.

One key benefit to the MCMC-based methods described in this section is that they provide an estimate of the gradient of log Z, and thus we can essentially decompose the problem into the log p̃ contribution and the log Z contribution. We can then use any other method to tackle log p̃(x), and just add our negative phase gradient onto the other method's gradient. In particular, this means that our positive phase can make use of methods that provide only a lower bound on p̃. Most of the other methods of dealing with log Z presented in this chapter are
incompatible with bound-based positive phase methods.

18.3 Pseudolikelihood

Monte Carlo approximations to the partition function and its gradient directly confront the partition function. Other approaches sidestep the issue, by training the model without computing the partition function. Most of these approaches are based on the observation that it is easy to compute ratios of probabilities in an undirected probabilistic model. This is because the partition function appears in both the numerator and the denominator of the ratio and cancels out:

p(x) / p(y) = ( (1/Z) p̃(x) ) / ( (1/Z) p̃(y) ) = p̃(x) / p̃(y).    (18.17)

The pseudolikelihood is based on the observation that conditional probabilities take this ratio-based form, and thus can be computed without knowledge of the partition function. Suppose that we partition x into a, b and c, where a contains the variables we want to find the conditional distribution over, b contains the variables we want to condition on, and c contains the variables that are not part of our query.
p(a | b) = p(a, b) / p(b) = p(a, b) / Σ_{a,c} p(a, b, c) = p̃(a, b) / Σ_{a,c} p̃(a, b, c).    (18.18)

This quantity requires marginalizing out a, which can be a very efficient operation provided that a and c do not contain very many variables. In the extreme case, a can be a single variable and c can be empty, making this operation require only as many evaluations of p̃ as there are values of a single random variable.

Unfortunately, in order to compute the log-likelihood, we need to marginalize out large sets of variables. If there are n variables total, we must marginalize a set of size n − 1. By the chain rule of probability,

log p(x) = log p(x_1) + log p(x_2 | x_1) + ··· + log p(x_n | x_{1:n−1}).    (18.19)
In this case, we have made the set a maximally small, but c can be as large as x_{2:n}. What if we simply move c into b to reduce the computational cost? This yields the pseudolikelihood (Besag, 1975) objective function, based on predicting the value of feature x_i given all of the other features x_{−i}:

Σ_{i=1}^n log p(x_i | x_{−i}).    (18.20)
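Each conditional in equation 18.20 is a ratio of two unnormalized probabilities that differ only in the value of x_i, so Z cancels. A minimal sketch for a hypothetical binary pairwise model with log p̃(x) = ½xᵀWx + bᵀx; the model and names are illustrative assumptions, not the book's code:

```python
import numpy as np

def log_p_tilde(x, W, b):
    # Unnormalized log-probability for E(x) = -0.5 x^T W x - b^T x.
    return 0.5 * x @ W @ x + b @ x

def pseudolikelihood(x, W, b):
    # Sum_i log p(x_i | x_{-i}); Z cancels in every conditional.
    total = 0.0
    for i in range(len(x)):
        x_on, x_off = x.copy(), x.copy()
        x_on[i], x_off[i] = 1.0, 0.0
        a = log_p_tilde(x_on, W, b)
        c = log_p_tilde(x_off, W, b)
        # log p(x_i | x_{-i}) = log p~(x) - log(p~(x_on) + p~(x_off))
        total += (a if x[i] == 1.0 else c) - np.logaddexp(a, c)
    return total
```

With all parameters zero every conditional equals ½, so the objective is n log ½; each evaluation touches p̃ only 2n times and never Z, matching the k × n count for binary (k = 2) variables.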
If each random variable has k different values, this requires only k × n evaluations of p̃ to compute, as opposed to the k^n evaluations needed to compute the partition function.

This may look like an unprincipled hack, but it can be proven that estimation by maximizing the pseudolikelihood is asymptotically consistent (Mase, 1995). Of course, in the case of datasets that do not approach the large sample limit, pseudolikelihood may display different behavior from the maximum likelihood estimator.

It is possible to trade computational complexity for deviation from maximum likelihood behavior by using the generalized pseudolikelihood estimator (Huang and Ogata, 2002). The generalized pseudolikelihood estimator uses m different sets S^(i), i = 1, ..., m, of indices of variables that appear together on the left side of the conditioning bar. In the extreme case of m = 1 and S^(1) = {1, ..., n}, the generalized pseudolikelihood recovers the log-likelihood.
In the extreme case of m = n and S^(i) = {i}, the generalized pseudolikelihood recovers the pseudolikelihood. The generalized pseudolikelihood objective function is given by

Σ_{i=1}^m log p(x_{S^(i)} | x_{−S^(i)}).    (18.21)

The performance of pseudolikelihood-based approaches depends largely on how the model will be used. Pseudolikelihood tends to perform poorly on tasks that require a good model of the full joint p(x), such as density estimation and sampling. However, it can perform better than maximum likelihood for tasks that require only the conditional distributions used during training, such as filling in small amounts of missing values. Generalized pseudolikelihood techniques are especially powerful if the data has regular structure that allows the S index sets to be designed to capture the most important correlations while leaving out groups of variables that only have negligible correlation.
For example, in natural images, pixels that are widely separated in space also have weak correlation, so the generalized pseudolikelihood can be applied with each set S being a small, spatially localized window.

One weakness of the pseudolikelihood estimator is that it cannot be used with other approximations that provide only a lower bound on p̃(x), such as variational inference, which will be covered in chapter 19. This is because p̃ appears in the denominator. A lower bound on the denominator provides only an upper bound on the expression as a whole, and there is no benefit to maximizing an upper bound. This makes it difficult to apply pseudolikelihood approaches to deep models such as deep Boltzmann machines, since variational methods are one of the dominant approaches to approximately marginalizing out the many layers of hidden variables
that interact with each other. However, pseudolikelihood is still useful for deep learning, because it can be used to train single layer models, or deep models using approximate inference methods that are not based on lower bounds.

Pseudolikelihood has a much greater cost per gradient step than SML, due to its explicit computation of all of the conditionals. However, generalized pseudolikelihood and similar criteria can still perform well if only one randomly selected conditional is computed per example (Goodfellow et al., 2013b), thereby bringing the computational cost down to match that of SML.

Though the pseudolikelihood estimator does not explicitly minimize log Z, it can still be thought of as having something resembling a negative phase. The denominators of each conditional distribution result in the learning algorithm suppressing the probability of all states that have only one variable differing from a training example.

See Marlin and de Freitas (2011) for a theoretical analysis of the asymptotic efficiency of pseudolikelihood.
18.4 Score Matching and Ratio Matching

Score matching (Hyvärinen, 2005) provides another consistent means of training a model without estimating Z or its derivatives. The name score matching comes from terminology in which the derivatives of a log density with respect to its argument, ∇_x log p(x), are called its score. The strategy used by score matching is to minimize the expected squared difference between the derivatives of the model's log density with respect to the input and the derivatives of the data's log density with respect to the input:

L(x, θ) = ½ ‖∇_x log p_model(x; θ) − ∇_x log p_data(x)‖²₂,   (18.22)

J(θ) = ½ E_{p_data(x)} L(x, θ),   (18.23)

θ* = argmin_θ J(θ).   (18.24)

This objective function avoids the difficulties associated with differentiating the partition function Z, because Z is not a function of x and therefore ∇_x Z = 0. Initially, score matching appears to have a new difficulty: computing the score of the data distribution requires knowledge of the true distribution generating the training data, p_data.
Fortunately, minimizing the expected value of L(x, θ) is equivalent to minimizing the expected value of

L̃(x, θ) = Σ_{j=1}^n [ ∂²/∂x_j² log p_model(x; θ) + ½ (∂/∂x_j log p_model(x; θ))² ],   (18.25)

where n is the dimensionality of x.

Because score matching requires taking derivatives with respect to x, it is not applicable to models of discrete data. However, the latent variables in the model may be discrete.

Like the pseudolikelihood, score matching works only when we are able to evaluate log p̃(x) and its derivatives directly. It is not compatible with methods that provide only a lower bound on log p̃(x), because score matching requires the derivatives and second derivatives of log p̃(x), and a lower bound conveys no information about its derivatives. This means that score matching cannot be applied to estimating models with complicated interactions between the hidden units, such as sparse coding models or deep Boltzmann machines. While score matching can be used to pretrain the first hidden layer of a larger model, it has not been applied as a pretraining strategy for the deeper layers of a larger model. This is probably because the hidden layers of such models usually contain some discrete variables.
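As an illustration of equation 18.25, consider a zero-mean Gaussian energy model log p̃(x) = −½ xᵀΛx, for which both terms are available in closed form: the score is −Λx and the second derivatives are the negated diagonal of Λ. A hedged sketch (the model and names are assumptions for illustration):

```python
import numpy as np

def score_matching_objective(x, Lam):
    """Eq. 18.25 for a zero-mean Gaussian energy model
    log p_tilde(x) = -0.5 x^T Lam x  (Lam symmetric positive definite).
    Both terms involve only derivatives of log p_tilde, so Z never appears."""
    first = -Lam @ x                   # d/dx_j log p_model(x)
    second = -np.diag(Lam)             # d^2/dx_j^2 log p_model(x)
    return np.sum(second + 0.5 * first ** 2)

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
Lam = A @ A.T + n * np.eye(n)          # symmetric positive definite precision
x = rng.normal(size=n)
print(score_matching_objective(x, Lam))
```

For models without closed-form derivatives one would typically obtain both terms with automatic differentiation; the structure of the objective is unchanged.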
While score matching does not explicitly have a negative phase, it can be viewed as a version of contrastive divergence using a specific kind of Markov chain (Hyvärinen, 2007a). The Markov chain in this case is not Gibbs sampling, but rather a different approach that makes local moves guided by the gradient. Score matching is equivalent to CD with this type of Markov chain when the size of the local moves approaches zero.

Lyu (2009) generalized score matching to the discrete case (but made an error in their derivation that was corrected by Marlin et al. (2010)). Marlin et al. (2010) found that generalized score matching (GSM) does not work in high-dimensional discrete spaces where the observed probability of many events is 0.
A more successful approach to extending the basic ideas of score matching to discrete data is ratio matching (Hyvärinen, 2007b). Ratio matching applies specifically to binary data. It consists of minimizing the average over examples of the following objective function:

L^{RM}(x, θ) = Σ_{j=1}^n ( 1 / (1 + p_model(x; θ) / p_model(f(x, j); θ)) )²,   (18.26)

where f(x, j) returns x with the bit at position j flipped.
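Since only the ratio p_model(x)/p_model(f(x, j)) is needed, equation 18.26 can be evaluated directly from an unnormalized energy model. A sketch for a small fully visible Boltzmann machine (an illustrative model choice, not the book's):

```python
import numpy as np

def ratio_matching_objective(x, W, b):
    """Eq. 18.26 for a fully visible Boltzmann machine over binary x.
    f(x, j) flips bit j; p_model(x)/p_model(f(x,j)) equals
    p_tilde(x)/p_tilde(f(x,j)) because the partition function cancels."""
    def log_ptilde(v):
        return 0.5 * v @ W @ v + b @ v
    total = 0.0
    for j in range(len(x)):
        xf = x.copy()
        xf[j] = 1.0 - xf[j]            # flip bit j
        ratio = np.exp(log_ptilde(x) - log_ptilde(xf))
        total += (1.0 / (1.0 + ratio)) ** 2
    return total

rng = np.random.default_rng(1)
n = 6
W = rng.normal(size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
b = rng.normal(size=n)
x = rng.integers(0, 2, size=n).astype(float)
print(ratio_matching_objective(x, W, b))
```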
Ratio matching avoids the partition function using the same trick as the pseudolikelihood estimator: in a ratio of two probabilities, the partition function cancels out. Marlin et al. (2010) found that ratio matching outperforms SML, pseudolikelihood, and GSM in terms of the ability of models trained with ratio matching to denoise test set images.

Like the pseudolikelihood estimator, ratio matching requires n evaluations of p̃ per data point, making its computational cost per update roughly n times higher than that of SML.

As with the pseudolikelihood estimator, ratio matching can be thought of as pushing down on all fantasy states that have only one variable differing from a training example. Since ratio matching applies specifically to binary data, this means that it acts on all fantasy states within Hamming distance 1 of the data.

Ratio matching can also be useful as the basis for dealing with high-dimensional sparse data, such as word count vectors. This kind of data poses a challenge for MCMC-based methods because the data is extremely expensive to represent in dense format, yet the MCMC sampler does not yield sparse values until the model has learned to represent the sparsity in the data distribution.
Dauphin and Bengio (2013) overcame this issue by designing an unbiased stochastic approximation to ratio matching. The approximation evaluates only a randomly selected subset of the terms of the objective, and does not require the model to generate complete fantasy samples.

See Marlin and de Freitas (2011) for a theoretical analysis of the asymptotic efficiency of ratio matching.

18.5 Denoising Score Matching

In some cases we may wish to regularize score matching, by fitting a distribution

p_smoothed(x) = ∫ p_data(y) q(x | y) dy   (18.27)

rather than the true p_data. The distribution q(x | y) is a corruption process, usually one that forms x by adding a small amount of noise to y.

Denoising score matching is especially useful because in practice we usually do not have access to the true p_data but rather only an empirical distribution defined by samples from it.
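Sampling from p_smoothed in equation 18.27 is straightforward when q(x | y) is Gaussian corruption: draw a training point, then add noise. A minimal sketch under those assumptions (names and dataset are illustrative):

```python
import numpy as np

def sample_smoothed(data, sigma, n_samples, rng):
    """Draw samples from p_smoothed(x) = ∫ p_data(y) q(x|y) dy  (eq. 18.27)
    where p_data is the empirical distribution over rows of `data` and
    q(x|y) = N(x; y, sigma^2 I): pick a training point, then add noise."""
    idx = rng.integers(0, len(data), size=n_samples)   # y ~ p_data (empirical)
    y = data[idx]
    return y + sigma * rng.normal(size=y.shape)        # x ~ q(x | y)

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 2))       # stand-in empirical dataset
samples = sample_smoothed(data, sigma=0.1, n_samples=5000, rng=rng)
print(samples.shape)
```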
Any consistent estimator will, given enough capacity, make p_model into a set of Dirac distributions centered on the training points. Smoothing by q helps to reduce this problem, at the loss of the asymptotic consistency property described in section 5.4.5.
Kingma and LeCun (2010) introduced a procedure for performing regularized score matching with the smoothing distribution q being normally distributed noise.

Recall from section 14.5.1 that several autoencoder training algorithms are equivalent to score matching or denoising score matching. These autoencoder training algorithms are therefore a way of overcoming the partition function problem.

18.6 Noise-Contrastive Estimation

Most techniques for estimating models with intractable partition functions do not provide an estimate of the partition function. SML and CD estimate only the gradient of the log partition function, rather than the partition function itself. Score matching and pseudolikelihood avoid computing quantities related to the partition function altogether.

Noise-contrastive estimation (NCE) (Gutmann and Hyvärinen, 2010) takes a different strategy. In this approach, the probability distribution estimated by the model is represented explicitly as

log p_model(x) = log p̃_model(x; θ) + c,   (18.28)

where c is explicitly introduced as an approximation of −log Z(θ). Rather than estimating only θ, the noise-contrastive estimation procedure treats c as just another parameter and estimates θ and c simultaneously, using the same algorithm for both.
The resulting log p_model(x) thus may not correspond exactly to a valid probability distribution, but it will become closer and closer to being valid as the estimate of c improves.¹

Such an approach would not be possible using maximum likelihood as the criterion for the estimator. The maximum likelihood criterion would choose to set c arbitrarily high, rather than setting c to create a valid probability distribution.

NCE works by reducing the unsupervised learning problem of estimating p(x) to that of learning a probabilistic binary classifier in which one of the categories corresponds to the data generated by the model. This supervised learning problem is constructed in such a way that maximum likelihood estimation in this supervised learning problem defines an asymptotically consistent estimator of the original problem.

¹NCE is also applicable to problems with a tractable partition function, where there is no need to introduce the extra parameter c. However, it has generated the most interest as a means of estimating models with intractable partition functions.
Specifically, we introduce a second distribution, the noise distribution p_noise(x). The noise distribution should be tractable to evaluate and to sample from. We can now construct a model over both x and a new, binary class variable y. In the new joint model, we specify that

p_joint(y = 1) = 1/2,   (18.29)

p_joint(x | y = 1) = p_model(x),   (18.30)

p_joint(x | y = 0) = p_noise(x).   (18.31)

In other words, y is a switch variable that determines whether we will generate x from the model or from the noise distribution.

We can construct a similar joint model of training data. In this case, the switch variable determines whether we draw x from the data or from the noise distribution. Formally, p_train(y = 1) = 1/2, p_train(x | y = 1) = p_data(x), and p_train(x | y = 0) = p_noise(x).
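Equations 18.29-18.31 describe a generative process that is easy to state as code: flip a fair coin for y, then draw x from the model or from the noise distribution. A sketch with hypothetical stand-in samplers:

```python
import numpy as np

def sample_joint(sample_model, sample_noise, rng):
    """Draw (y, x) from the joint model of eqs. 18.29-18.31:
    y is a fair coin; x comes from the model if y = 1, from noise if y = 0."""
    y = int(rng.random() < 0.5)                     # p_joint(y = 1) = 1/2
    x = sample_model(rng) if y == 1 else sample_noise(rng)
    return y, x

# Hypothetical stand-ins: "model" is N(2, 1), noise is N(0, 1).
rng = np.random.default_rng(0)
pairs = [sample_joint(lambda r: r.normal(2.0, 1.0),
                      lambda r: r.normal(0.0, 1.0), rng)
         for _ in range(10000)]
frac_model = sum(y for y, _ in pairs) / len(pairs)
print(frac_model)   # close to 0.5
```

The joint model of training data has the same structure, with the model sampler replaced by draws from the dataset.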
We can now just use standard maximum likelihood learning on the supervised learning problem of fitting p_joint to p_train:

θ, c = argmax_{θ,c} E_{x,y∼p_train} log p_joint(y | x).   (18.32)

The distribution p_joint is essentially a logistic regression model applied to the difference in log probabilities of the model and the noise distribution:

p_joint(y = 1 | x) = p_model(x) / (p_model(x) + p_noise(x))   (18.33)

                  = 1 / (1 + p_noise(x) / p_model(x))   (18.34)

                  = 1 / (1 + exp(log [p_noise(x) / p_model(x)]))   (18.35)

                  = σ(−log [p_noise(x) / p_model(x)])   (18.36)

                  = σ(log p_model(x) − log p_noise(x)).   (18.37)
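Equation 18.37 turns NCE into ordinary binary cross-entropy between data points (y = 1) and noise samples (y = 0), with the classifier's logit given by log p̃_model(x) + c − log p_noise(x). A hedged one-dimensional sketch (the model, noise choice, and names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nce_loss(log_ptilde, c, log_pnoise, x_data, x_noise):
    """Negative of the NCE objective (eq. 18.32), using eq. 18.37:
    p_joint(y=1|x) = sigma(log p_model(x) - log p_noise(x)),
    with log p_model(x) = log p_tilde(x; theta) + c  (eq. 18.28).
    Data points are labeled y = 1, noise samples y = 0."""
    logit_data = log_ptilde(x_data) + c - log_pnoise(x_data)
    logit_noise = log_ptilde(x_noise) + c - log_pnoise(x_noise)
    return -(np.mean(np.log(sigmoid(logit_data))) +
             np.mean(np.log(sigmoid(-logit_noise)))) / 2.0

# Illustrative 1-D example: unnormalized Gaussian model, standard normal noise.
rng = np.random.default_rng(0)
mu = 1.0
log_ptilde = lambda x: -0.5 * (x - mu) ** 2                 # unnormalized
log_pnoise = lambda x: -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)
c = -0.5 * np.log(2 * np.pi)          # true -log Z for a unit-variance Gaussian
x_data = rng.normal(mu, 1.0, size=1000)
x_noise = rng.normal(0.0, 1.0, size=1000)
print(nce_loss(log_ptilde, c, log_pnoise, x_data, x_noise))
```

In a full implementation one would minimize this loss with respect to both the model parameters and c, exactly as the text describes.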
NCE is thus simple to apply so long as log p̃_model is easy to back-propagate through, and, as specified above, p_noise is easy to evaluate (in order to evaluate p_joint) and to sample from (in order to generate the training data).

NCE is most successful when applied to problems with few random variables, but it can work well even if those random variables can take on a high number of values. For example, it has been successfully applied to modeling the conditional distribution over a word given the context of the word (Mnih and Kavukcuoglu, 2013). Though the word may be drawn from a large vocabulary, there is only one word.

When NCE is applied to problems with many random variables, it becomes less efficient. The logistic regression classifier can reject a noise sample by identifying any one variable whose value is unlikely. This means that learning slows down greatly after p_model has learned the basic marginal statistics. Imagine learning a model of images of faces, using unstructured Gaussian noise as p_noise. If p_model learns about eyes, it can reject almost all unstructured noise samples without having learned anything about other facial features, such as mouths.
The constraint that p_noise must be easy to evaluate and easy to sample from can be overly restrictive. When p_noise is simple, most samples are likely to be too obviously distinct from the data to force p_model to improve noticeably.

Like score matching and pseudolikelihood, NCE does not work if only a lower bound on p̃ is available. Such a lower bound could be used to construct a lower bound on p_joint(y = 1 | x), but it can only be used to construct an upper bound on p_joint(y = 0 | x), which appears in half the terms of the NCE objective. Likewise, a lower bound on p_noise is not useful, because it provides only an upper bound on p_joint(y = 1 | x).

When the model distribution is copied to define a new noise distribution before each gradient step, NCE defines a procedure called self-contrastive estimation, whose expected gradient is equivalent to the expected gradient of maximum likelihood (Goodfellow, 2014).
The special case of NCE where the noise samples are those generated by the model suggests that maximum likelihood can be interpreted as a procedure that forces a model to constantly learn to distinguish reality from its own evolving beliefs, while noise-contrastive estimation achieves some reduced computational cost by only forcing the model to distinguish reality from a fixed baseline (the noise model).

Using the supervised task of classifying between training samples and generated samples (with the model energy function used in defining the classifier) to provide a gradient on the model had been introduced earlier in various forms (Welling et al., 2003b; Bengio, 2009).
Noise-contrastive estimation is based on the idea that a good generative model should be able to distinguish data from noise. A closely related idea is that a good generative model should be able to generate samples that no classifier can distinguish from data. This idea yields generative adversarial networks (section 20.10.4).

18.7 Estimating the Partition Function

While much of this chapter is dedicated to describing methods that avoid needing to compute the intractable partition function Z(θ) associated with an undirected graphical model, in this section we discuss several methods for directly estimating the partition function.

Estimating the partition function can be important because we require it if we wish to compute the normalized likelihood of data. This is often important in evaluating the model, monitoring training performance, and comparing models to each other.

For example, imagine we have two models: model M_A defining a probability distribution p_A(x; θ_A) = (1/Z_A) p̃_A(x; θ_A) and model M_B defining a probability distribution p_B(x; θ_B) = (1/Z_B) p̃_B(x; θ_B). A common way to compare the models is to evaluate and compare the likelihood that both models assign to an i.i.d. test dataset.
Suppose the test set consists of m examples {x⁽¹⁾, ..., x⁽ᵐ⁾}. If

Π_i p_A(x⁽ⁱ⁾; θ_A) > Π_i p_B(x⁽ⁱ⁾; θ_B),

or equivalently if

Σ_i log p_A(x⁽ⁱ⁾; θ_A) − Σ_i log p_B(x⁽ⁱ⁾; θ_B) > 0,   (18.38)

then we say that M_A is a better model than M_B (or, at least, it is a better model of the test set), in the sense that it has a better test log-likelihood. Unfortunately, testing whether this condition holds requires knowledge of the partition function: equation 18.38 seems to require evaluating the log probability that the model assigns to each point, which in turn requires evaluating the partition function.
We can simplify the situation slightly by re-arranging equation 18.38 into a form where we need to know only the ratio of the two models' partition functions:

Σ_i log p_A(x⁽ⁱ⁾; θ_A) − Σ_i log p_B(x⁽ⁱ⁾; θ_B) = Σ_i log [ p̃_A(x⁽ⁱ⁾; θ_A) / p̃_B(x⁽ⁱ⁾; θ_B) ] − m log [ Z(θ_A) / Z(θ_B) ].   (18.39)
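One standard way to estimate the remaining ratio Z(θ_A)/Z(θ_B) in equation 18.39 is simple importance sampling: Z_A/Z_B = E_{x∼p_B}[ p̃_A(x)/p̃_B(x) ], provided we can sample from p_B and the weights are well behaved. A sketch with two unnormalized Gaussians whose true ratio is known (illustrative choices throughout):

```python
import numpy as np

def log_ratio_estimate(log_ptilde_a, log_ptilde_b, samples_b):
    """Monte Carlo estimate of log(Z_A / Z_B) via importance sampling:
    Z_A / Z_B = E_{x ~ p_B}[ p_tilde_A(x) / p_tilde_B(x) ],
    the quantity needed to complete the comparison in eq. 18.39.
    `samples_b` must be (approximately) exact samples from p_B."""
    log_w = log_ptilde_a(samples_b) - log_ptilde_b(samples_b)
    m = np.max(log_w)                        # log-sum-exp for stability
    return m + np.log(np.mean(np.exp(log_w - m)))

# Illustrative 1-D check with two unnormalized Gaussians, where the true
# answer is known: Z = sigma * sqrt(2*pi) for exp(-x^2 / (2 sigma^2)).
rng = np.random.default_rng(0)
sigma_a, sigma_b = 1.0, 2.0                  # p_B is the broader distribution
log_ptilde_a = lambda x: -0.5 * x ** 2 / sigma_a ** 2
log_ptilde_b = lambda x: -0.5 * x ** 2 / sigma_b ** 2
samples_b = rng.normal(0.0, sigma_b, size=200_000)
est = log_ratio_estimate(log_ptilde_a, log_ptilde_b, samples_b)
print(est, np.log(sigma_a / sigma_b))        # estimate vs. true log ratio
```

Sampling from the broader distribution keeps the importance weights bounded here; when p_A and p_B are not close, such simple importance sampling breaks down, which motivates more sophisticated estimators.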