…generative learning algorithms, but they are still deservedly recognized for their important role in deep learning history.

Deep belief networks are generative models with several layers of latent variables. The latent variables are typically binary, while the visible units may be binary or real. There are no intralayer connections. Usually, every unit in each layer is connected to every unit in each neighboring layer, though it is possible to construct more sparsely connected DBNs. The connections between the top two layers are undirected. The connections between all other layers are directed, with the arrows pointed toward the layer that is closest to the data. See figure 20.1 for an example.

A DBN with l hidden layers contains l weight matrices: W^{(1)}, \dots, W^{(l)}. It also contains l + 1 bias vectors: b^{(0)}, \dots, b^{(l)}, with b^{(0)} providing the biases for the visible layer. The probability distribution represented by the DBN is given by

p(h^{(l)}, h^{(l-1)}) \propto \exp\left( b^{(l)\top} h^{(l)} + b^{(l-1)\top} h^{(l-1)} + h^{(l-1)\top} W^{(l)} h^{(l)} \right),   (20.17)
Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville
p(h^{(k)}_i = 1 \mid h^{(k+1)}) = \sigma\left( b^{(k)}_i + W^{(k+1)\top}_{:,i} h^{(k+1)} \right) \quad \forall i, \forall k \in 1, \dots, l-2,   (20.18)

p(v_i = 1 \mid h^{(1)}) = \sigma\left( b^{(0)}_i + W^{(1)\top}_{:,i} h^{(1)} \right) \quad \forall i.   (20.19)

In the case of real-valued visible units, substitute

v \sim \mathcal{N}\left( v; b^{(0)} + W^{(1)\top} h^{(1)}, \beta^{-1} \right)   (20.20)
Chapter 20. Deep Generative Models

with \beta diagonal for tractability. Generalizations to other exponential family visible units are straightforward, at least in theory. A DBN with only one hidden layer is just an RBM.

To generate a sample from a DBN, we first run several steps of Gibbs sampling on the top two hidden layers. This stage is essentially drawing a sample from the RBM defined by the top two hidden layers. We can then use a single pass of ancestral sampling through the rest of the model to draw a sample from the visible units.

Deep belief networks incur many of the problems associated with both directed models and undirected models. Inference in a deep belief network is intractable due to the explaining away effect within each directed layer, and due to the interaction between the two hidden layers that have undirected connections. Evaluating or maximizing the standard evidence lower bound on the log-likelihood is also intractable, because the evidence lower bound takes the expectation of cliques whose size is equal to the network width. Evaluating or maximizing the log-likelihood requires confronting not just the problem of intractable inference to marginalize out the latent variables, but also the problem of an intractable partition function within the undirected model of the
top two layers.

To train a deep belief network, one begins by training an RBM to maximize \mathbb{E}_{v \sim p_{\text{data}}} \log p(v) using contrastive divergence or stochastic maximum likelihood. The parameters of the RBM then define the parameters of the first layer of the DBN. Next, a second RBM is trained to approximately maximize

\mathbb{E}_{v \sim p_{\text{data}}} \mathbb{E}_{h^{(1)} \sim p^{(1)}(h^{(1)} \mid v)} \log p^{(2)}(h^{(1)}),   (20.21)

where p^{(1)} is the probability distribution represented by the first RBM and p^{(2)} is the probability distribution represented by the second RBM. In other words, the second RBM is trained to model the distribution defined by sampling the hidden units of the first RBM, when the first RBM is driven by the data. This procedure can be repeated indefinitely, to add as many layers to the DBN as desired, with each new RBM modeling
the samples of the previous one. Each RBM defines another layer of the DBN. This procedure can be justified as increasing a variational lower bound on the log-likelihood of the data under the DBN (Hinton et al., 2006).

In most applications, no effort is made to jointly train the DBN after the greedy layer-wise procedure is complete. However, it is possible to perform generative fine-tuning using the wake-sleep algorithm.
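The greedy layer-wise procedure described above can be sketched in NumPy. This is a minimal illustration under simplifying assumptions, not a faithful reproduction of any published setup: the `RBM` class, the CD-1 update, and all sizes and learning rates are hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Draw binary states from elementwise Bernoulli probabilities.
    return (rng.random(p.shape) < p).astype(float)

class RBM:
    def __init__(self, n_vis, n_hid):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible biases
        self.c = np.zeros(n_hid)   # hidden biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0, lr=0.05):
        # One step of contrastive divergence (CD-1).
        ph0 = self.hidden_probs(v0)
        h0 = sample(ph0)
        v1 = self.visible_probs(h0)   # probabilities used for the reconstruction
        ph1 = self.hidden_probs(v1)
        m = v0.shape[0]
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / m
        self.b += lr * (v0 - v1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)

def pretrain_dbn(data, layer_sizes, epochs=5):
    """Greedy layer-wise pretraining: each RBM models samples of the previous one."""
    rbms, x = [], data
    for n_hid in layer_sizes:
        rbm = RBM(x.shape[1], n_hid)
        for _ in range(epochs):
            rbm.cd1_step(x)
        rbms.append(rbm)
        # Drive the trained RBM with its training data and sample its hidden
        # units to form the training set for the next RBM in the stack.
        x = sample(rbm.hidden_probs(x))
    return rbms

# Toy binary data with hypothetical dimensions.
data = (rng.random((100, 20)) < 0.3).astype(float)
dbn = pretrain_dbn(data, layer_sizes=[16, 8])
```

Each new RBM is fit to samples of the previous RBM's hidden units when that RBM is driven by the data, mirroring the stacking described in the text.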
The trained DBN may be used directly as a generative model, but most of the interest in DBNs arose from their ability to improve classification models. We can take the weights from the DBN and use them to define an MLP:

h^{(1)} = \sigma\left( b^{(1)} + v^\top W^{(1)} \right),   (20.22)

h^{(l)} = \sigma\left( b^{(l)} + h^{(l-1)\top} W^{(l)} \right) \quad \forall l \in 2, \dots, m.   (20.23)

After initializing this MLP with the weights and biases learned via generative training of the DBN, we may train the MLP to perform a classification task. This additional training of the MLP is an example of discriminative fine-tuning.

This specific choice of MLP is somewhat arbitrary, compared to many of the inference equations in chapter 19 that are derived from first principles. This MLP is a heuristic choice that seems to work well in practice and is used consistently in the literature. Many approximate inference techniques are motivated by their ability to find a maximally
tight variational lower bound on the log-likelihood under some set of constraints. One can construct a variational lower bound on the log-likelihood using the hidden unit expectations defined by the DBN's MLP, but this is true of any probability distribution over the hidden units, and there is no reason to believe that this MLP provides a particularly tight bound. In particular, the MLP ignores many important interactions in the DBN graphical model. The MLP propagates information upward from the visible units to the deepest hidden units, but does not propagate any information downward or sideways. The DBN graphical model has explaining away interactions between all of the hidden units within the same layer, as well as top-down interactions between layers.

While the log-likelihood of a DBN is intractable, it may be approximated with AIS (Salakhutdinov and Murray, 2008). This permits evaluating its quality as a generative model.

The term "deep belief network" is commonly used incorrectly to refer to any kind of deep neural
network, even networks without latent variable semantics. The term "deep belief network" should refer specifically to models with undirected connections in the deepest layer and directed connections pointing downward between all other pairs of consecutive layers.

The term "deep belief network" may also cause some confusion because the term "belief network" is sometimes used to refer to purely directed models, while deep belief networks contain an undirected layer. Deep belief networks also share the acronym DBN with dynamic Bayesian networks (Dean and Kanazawa, 1989), which are Bayesian networks for representing Markov chains.
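The upward pass of equations 20.22 and 20.23, which defines the MLP initialized from the DBN's generatively trained weights, can be sketched as follows. The two-layer configuration and all dimensions here are hypothetical; the sigmoid MLP itself is the heuristic choice discussed above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dbn_mlp_forward(v, weights, biases):
    """Upward pass of the MLP defined by DBN weights:
    h^(1) = sigmoid(b^(1) + v W^(1)),
    h^(l) = sigmoid(b^(l) + h^(l-1) W^(l)) for l = 2, ..., m."""
    h = v
    for W, b in zip(weights, biases):
        h = sigmoid(b + h @ W)
    return h

# Hypothetical weights, as if taken from a pretrained DBN.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((20, 16)), rng.standard_normal((16, 8))]
biases = [np.zeros(16), np.zeros(8)]
v = (rng.random((5, 20)) < 0.5).astype(float)
features = dbn_mlp_forward(v, weights, biases)
```

The resulting `features` would then feed a classifier output layer, and the whole stack would be trained discriminatively, as described in the text.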
[Figure 20.2: The graphical model for a deep Boltzmann machine with one visible layer (bottom) and two hidden layers. Connections are only between units in neighboring layers. There are no intralayer connections.]

20.4 Deep Boltzmann Machines

A deep Boltzmann machine, or DBM (Salakhutdinov and Hinton, 2009a), is another kind of deep generative model. Unlike the deep belief network (DBN), it is an entirely undirected model. Unlike the RBM, the DBM has several layers of latent variables (RBMs have just one). But like the RBM, within each layer, each of the variables is mutually independent, conditioned on the variables in the neighboring layers. See figure 20.2 for the graph structure. Deep Boltzmann
machines have been applied to a variety of tasks, including document modeling (Srivastava et al., 2013).

Like RBMs and DBNs, DBMs typically contain only binary units (as we assume for simplicity of our presentation of the model), but it is straightforward to include real-valued visible units.

A DBM is an energy-based model, meaning that the joint probability distribution over the model variables is parametrized by an energy function E. In the case of a deep Boltzmann machine with one visible layer, v, and three hidden layers, h^{(1)}, h^{(2)} and h^{(3)}, the joint probability is given by:

P(v, h^{(1)}, h^{(2)}, h^{(3)}) = \frac{1}{Z(\theta)} \exp\left( -E(v, h^{(1)}, h^{(2)}, h^{(3)}; \theta) \right).   (20.24)

To simplify our presentation, we omit the bias parameters below
. The DBM energy function is then defined as follows:

E(v, h^{(1)}, h^{(2)}, h^{(3)}; \theta) = -v^\top W^{(1)} h^{(1)} - h^{(1)\top} W^{(2)} h^{(2)} - h^{(2)\top} W^{(3)} h^{(3)}.   (20.25)
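Equation 20.25 translates directly into code. The following is a minimal sketch with hypothetical layer sizes, and with bias terms omitted as in the text; note that computing the normalized probability would still require the intractable partition function Z(θ).

```python
import numpy as np

def dbm_energy(v, h1, h2, h3, W1, W2, W3):
    """Energy of a three-hidden-layer DBM with biases omitted (equation 20.25):
    E = -v^T W1 h1 - h1^T W2 h2 - h2^T W3 h3."""
    return -(v @ W1 @ h1 + h1 @ W2 @ h2 + h2 @ W3 @ h3)

# Hypothetical dimensions and random weights for illustration.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((3, 3))
W3 = rng.standard_normal((3, 2))
v = (rng.random(4) < 0.5).astype(float)
h1, h2, h3 = np.ones(3), np.ones(3), np.ones(2)
E = dbm_energy(v, h1, h2, h3, W1, W2, W3)  # unnormalized log-probability is -E
```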
[Figure 20.3: A deep Boltzmann machine, rearranged to reveal its bipartite graph structure.]

In comparison to the RBM energy function (equation 20.5), the DBM energy function includes connections between the hidden units (latent variables) in the form of the weight matrices (W^{(2)} and W^{(3)}). As we will
see, these connections have significant consequences both for the model's behavior and for how we go about performing inference in the model.

In comparison to fully connected Boltzmann machines (with every unit connected to every other unit), the DBM offers some advantages that are similar to those offered by the RBM. Specifically, as illustrated in figure 20.3, the DBM layers can be organized into a bipartite graph, with odd layers on one side and even layers on the other. This immediately implies that when we condition on the variables in the even layers, the variables in the odd layers become conditionally independent. Of course, when we condition on the variables in the odd layers, the variables in the even layers also become conditionally independent.

The bipartite structure of the DBM means that we can apply the same equations we have previously used for the conditional distributions of an RBM to determine the conditional distributions in a DBM
. The units within a layer are conditionally independent of each other given the values of the neighboring layers, so the distributions over binary variables can be fully described by the Bernoulli parameters giving the probability of each unit being active. In our example with two hidden layers, the activation probabilities are given by:

P(v_i = 1 \mid h^{(1)}) = \sigma\left( W^{(1)}_{i,:} h^{(1)} \right),   (20.26)
P(h^{(1)}_i = 1 \mid v, h^{(2)}) = \sigma\left( v^\top W^{(1)}_{:,i} + W^{(2)}_{i,:} h^{(2)} \right)   (20.27)

and

P(h^{(2)}_k = 1 \mid h^{(1)}) = \sigma\left( h^{(1)\top} W^{(2)}_{:,k} \right).   (20.28)

The bipartite structure makes Gibbs sampling in a deep Boltzmann machine efficient. The naive approach to Gibbs sampling is to update only one variable at a time. RBMs allow all of the visible units to be updated in one block and all of the hidden units to be updated in a second block. One might naively assume that a DBM with l layers requires l + 1 updates, with each iteration updating a block consisting of one layer of units. Instead, it is possible to update all of the units in only two iterations. Gibbs sampling can be divided into two blocks of updates, one including all even layers (including the visible layer) and the other including all odd layers. Due to the bipartite DBM connection pattern, given the even layers, the distribution over the odd layers is factorial and thus can be sampled
simultaneously and independently as a block. Likewise, given the odd layers, the even layers can be sampled simultaneously and independently as a block. Efficient sampling is especially important for training with the stochastic maximum likelihood algorithm.

20.4.1 Interesting Properties

Deep Boltzmann machines have many interesting properties.

DBMs were developed after DBNs. Compared to DBNs, the posterior distribution P(h | v) is simpler for DBMs. Somewhat counterintuitively, the simplicity of this posterior distribution allows richer approximations of the posterior. In the case of the DBN, we perform classification using a heuristically motivated approximate inference procedure, in which we guess that a reasonable value for the mean field expectation of the hidden units can be provided by an upward pass through the network in an MLP that uses sigmoid activation functions and the same weights as the original DBN. Any distribution Q(h) may be used to obtain a variational lower bound on the
log-likelihood. This heuristic procedure therefore allows us to obtain such a bound. However, the bound is not explicitly optimized in any way, so the bound may be far from tight. In particular, the heuristic estimate of Q ignores interactions between hidden units within the same layer, as well as the top-down feedback influence of hidden units in deeper layers on hidden units that are closer to the input. Because the heuristic MLP-based inference procedure in the DBN is not able to account for these interactions, the resulting Q is presumably far
from optimal. In DBMs, all of the hidden units within a layer are conditionally independent given the other layers. This lack of intralayer interaction makes it possible to use fixed point equations to actually optimize the variational lower bound and find the true optimal mean field expectations (to within some numerical tolerance).

The use of proper mean field allows the approximate inference procedure for DBMs to capture the influence of top-down feedback interactions. This makes DBMs interesting from the point of view of neuroscience, because the human brain is known to use many top-down feedback connections. Because of this property, DBMs have been used as computational models of real neuroscientific phenomena (Series et al., 2010; Reichert et al., 2011).

One unfortunate property of DBMs is that sampling from them is relatively difficult. DBNs only need to use MCMC sampling in their top pair of layers. The other layers are used only at the end of the sampling process, in one efficient ancestral sampling pass. To generate a sample from a DBM, it is necessary to use MCMC across all layers, with every layer of the model participating in every Markov chain transition.
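To make the preceding point concrete, a block Gibbs transition in which every layer participates can be sketched for the two-hidden-layer DBM, using the conditionals of equations 20.26 to 20.28. Dimensions and weights here are hypothetical, and bias terms are omitted as in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bernoulli(p):
    return (rng.random(p.shape) < p).astype(float)

def block_gibbs_step(v, h1, h2, W1, W2):
    # Odd block: given the even layers (v, h2), h1 is factorial (eq. 20.27).
    h1 = bernoulli(sigmoid(v @ W1 + h2 @ W2.T))
    # Even block: given h1, both v (eq. 20.26) and h2 (eq. 20.28) are factorial.
    v = bernoulli(sigmoid(h1 @ W1.T))
    h2 = bernoulli(sigmoid(h1 @ W2))
    return v, h1, h2

# Hypothetical layer sizes and small random weights.
nv, n1, n2 = 6, 4, 3
W1 = 0.1 * rng.standard_normal((nv, n1))
W2 = 0.1 * rng.standard_normal((n1, n2))
v, h1, h2 = np.zeros(nv), np.zeros(n1), np.zeros(n2)
for _ in range(100):  # every layer participates in every transition
    v, h1, h2 = block_gibbs_step(v, h1, h2, W1, W2)
```

Note the contrast with the DBN: here the chain must mix over all layers jointly, rather than running MCMC only at the top and finishing with one ancestral pass.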
20.4.2 DBM Mean Field Inference

The conditional distribution over one DBM layer given the neighboring layers is factorial. In the example of the DBM with two hidden layers, these distributions are P(v | h^{(1)}), P(h^{(1)} | v, h^{(2)}) and P(h^{(2)} | h^{(1)}). The distribution over all hidden layers generally does not factorize because of interactions between layers. In the example with two hidden layers, P(h^{(1)}, h^{(2)} | v) does not factorize due to the interaction weights W^{(2)} between h^{(1)} and h^{(2)}, which render these variables mutually dependent.

As was the case with the DBN, we are left to seek out methods to approximate the DBM posterior distribution. However, unlike the DBN, the DBM posterior distribution over its hidden units, while complicated, is easy to approximate with a variational approximation (as discussed
in section 19.4), specifically a mean field approximation. The mean field approximation is a simple form of variational inference, where we restrict the approximating distribution to fully factorial distributions. In the context of DBMs, the mean field equations capture the bidirectional interactions between layers. In this section we derive the iterative approximate inference procedure originally introduced in Salakhutdinov and Hinton (2009a).

In variational approximations to inference, we approach the task of approximating a particular target distribution, in our case the posterior distribution over the hidden units given the visible units, by some reasonably simple family of distributions. In the case of the mean field approximation, the approximating family is the set of distributions where the hidden units are conditionally independent.

We now develop the mean field approach for the example with two hidden layers. Let Q(h^{(1)}, h^{(2)} | v) be the approximation of P(h^{(1)}, h^{(2)} | v). The mean field assumption implies that

Q(h^{(1)}, h^{(2)} \mid v) = \prod_j Q(h^{(1)}_j \mid v) \prod_k Q(h^{(2)}_k \mid v).   (20.29)

The mean field approximation attempts to find a member of this family of distributions that best fits the true posterior P(h^{(1)}, h^{(2)} | v). Importantly, the inference process must be run again to find a different distribution Q every time we use a new value of v.

One can conceive of many ways of measuring how well Q(h | v) fits P(
h | v). The mean field approach is to minimize

\mathrm{KL}(Q \| P) = \sum_h Q(h^{(1)}, h^{(2)} \mid v) \log \frac{Q(h^{(1)}, h^{(2)} \mid v)}{P(h^{(1)}, h^{(2)} \mid v)}.   (20.30)

In general, we do not have to provide a parametric form of the approximating distribution beyond enforcing the independence assumptions. The variational approximation procedure is generally able to recover a functional form of the approximate distribution. However, in the case of a mean field assumption on binary hidden units (the case we are developing here), there is no loss of generality resulting from fixing a parametrization of the model in advance.

We parametrize Q as a product of Bernoulli distributions; that is, we associate the probability of each element of h^{(1)} with a parameter. Specifically, for each j, \hat{h}^{(1)}_j = Q(
h^{(1)}_j = 1 \mid v), where \hat{h}^{(1)}_j \in [0, 1], and for each k, \hat{h}^{(2)}_k = Q(h^{(2)}_k = 1 \mid v), where \hat{h}^{(2)}_k \in [0, 1]. Thus we have the following approximation to the posterior:

Q(h^{(1)}, h^{(2)} \mid v) = \prod_j Q(h^{(1)}_j \mid v) \prod_k Q(h^{(2)}_k \mid v)   (20.31)

= \prod_j (\hat{h}^{(1)}_j)^{h^{(1)}_j} (1 - \hat{h}^{(1)}_j)^{(1 - h^{(1)}_j)} \times \prod_k (\hat{h}^{(2)}_k)^{h^{(2)}_k} (1 - \hat{h}^{(2)}_k)^{(1 - h^{(2)}_k)}.   (20.32)

Of course, for DBMs with more layers, the approximate posterior parametrization can be extended in the obvious way, exploiting the
bipartite structure of the graph
to update all of the even layers simultaneously and then to update all of the odd layers simultaneously, following the same schedule as Gibbs sampling.

Now that we have specified our family of approximating distributions Q, it remains to specify a procedure for choosing the member of this family that best fits P. The most straightforward way to do this is to use the mean field equations specified by equation 19.56. These equations were derived by solving for where the derivatives of the variational lower bound are zero. They describe in an abstract manner how to optimize the variational lower bound for any model, simply by taking expectations with respect to Q.

Applying these general equations, we obtain the update rules (again, ignoring bias terms):

\hat{h}^{(1)}_j = \sigma\left( \sum_i v_i W^{(1)}_{i,j} + \sum_{k'} W^{(2)}_{j,k'} \hat{h}^{(2)}_{k'} \right) \quad \forall j,   (20.33)

\hat{h}^{(2)}_k = \sigma\left( \sum_{j'} W^{(2)}_{j',k} \hat{h}^{(1)}_{j'} \right) \quad \forall k.   (20.34)

At a fixed point of this system of equations, we have a local maximum of the variational lower bound L(Q). Thus these fixed
point update equations define an iterative algorithm in which we alternate updates of \hat{h}^{(1)}_j (using equation 20.33) and updates of \hat{h}^{(2)}_k (using equation 20.34). On small problems such as MNIST, as few as ten iterations can be sufficient to find an approximate positive phase gradient for learning, and fifty usually suffice to obtain a high quality representation of a single specific example to be used for high-accuracy classification. Extending approximate variational inference to deeper DBMs is straightforward.

20.4.3 DBM Parameter Learning

Learning in the DBM must confront both the challenge of an intractable partition function, using the techniques from chapter 18, and the challenge of an intractable posterior distribution, using the techniques from chapter 19.

As described in section 20.4.2, variational inference allows the construction of a distribution Q(h | v) that approximates the intractable P(h | v). Learning then proceeds
by maximizing \mathcal{L}(v, Q, \theta), the variational lower bound on the intractable log-likelihood, \log p(v; \theta).
For a deep Boltzmann machine with two hidden layers, \mathcal{L} is given by

\mathcal{L}(Q, \theta) = \sum_i \sum_{j'} v_i W^{(1)}_{i,j'} \hat{h}^{(1)}_{j'} + \sum_{j'} \sum_{k'} \hat{h}^{(1)}_{j'} W^{(2)}_{j',k'} \hat{h}^{(2)}_{k'} - \log Z(\theta) + \mathcal{H}(Q).   (20.35)

This expression still contains the log partition function, \log Z(\theta). Because a deep Boltzmann machine contains restricted Boltzmann machines as components, the hardness results for computing the partition function and sampling that apply to restricted Boltzmann machines also apply to deep Boltzmann machines. This means that evaluating the probability mass function of a Boltzmann machine requires approximate methods such as annealed importance sampling. Likewise, training the model requires approximations to the gradient of the log partition function. See chapter 18 for a general description of these methods. DBMs are typically trained using stochastic maximum likelihood. Many of the other techniques described in chapter 18 are not applicable. Techniques such as pseudolikelihood require the ability to evaluate the unnormalized probabilities, rather than merely obtain a variational lower bound on them. Contrastive divergence is slow for
deep Boltzmann machines because they do not allow efficient sampling of the hidden units given the visible units; instead, contrastive divergence would require burning in a Markov chain every time a new negative phase sample is needed.

The non-variational version of the stochastic maximum likelihood algorithm was discussed earlier, in section 18.2. Variational stochastic maximum likelihood as applied to the DBM is given in algorithm 20.1. Recall that we describe a simplified variant of the DBM that lacks bias parameters; including them is trivial.

20.4.4 Layer-Wise Pretraining

Unfortunately, training a DBM using stochastic maximum likelihood (as described above) from a random initialization usually results in failure. In some cases, the model fails to learn to represent the distribution adequately. In other cases, the DBM may represent the distribution well, but with no higher likelihood than could be obtained with just an RBM. A DBM with very small weights in all but
the first layer represents approximately the same distribution as an RBM. Various techniques that permit joint training have been developed and are described in section 20.4.5. However, the original and most popular method for overcoming the joint training problem of DBMs is greedy layer-wise pretraining. In this method, each layer of the DBM is trained in isolation as an RBM. The first layer is trained to model the input data. Each subsequent RBM is trained to model samples from the previous RBM's posterior distribution. After all of the
Algorithm 20.1 The variational stochastic maximum likelihood algorithm for training a DBM with two hidden layers.

Set ε, the step size, to a small positive number.
Set k, the number of Gibbs steps, high enough to allow a Markov chain of p(v, h(1), h(2); θ + Δθ) to burn in, starting from samples from p(v, h(1), h(2); θ).
Initialize three matrices, Ṽ, H̃(1) and H̃(2), each with m rows set to random values (e.g., from Bernoulli distributions, possibly with marginals matched to the model's marginals).
while not converged (learning loop) do
    Sample a minibatch of m examples from the training data and arrange them as the rows of a design matrix V.
    Initialize matrices Ĥ(1) and Ĥ(2), possibly to the model's marginals.
    while not converged (mean field inference loop) do
        Ĥ(1) ← σ(V W(1) + Ĥ(2) W(2)⊤).
        Ĥ(2) ← σ(Ĥ(1) W(2)).
    end while
    ΔW(1) ← (1/m) V⊤ Ĥ(1)
    ΔW(2) ← (1/m) Ĥ(1)⊤ Ĥ(2)
    for l = 1 to k (Gibbs sampling) do
        Gibbs block 1:
            ∀i, j, sample Ṽ_{i,j} from P(Ṽ_{i,j} = 1) = σ(W(1)_{j,:} (H̃(1)_{i,:})⊤).
            ∀i, j, sample H̃(2)_{i,j} from P(H̃(2)_{i,j} = 1) = σ(H̃(1)_{i,:} W(2)_{:,j}).
        Gibbs block 2:
            ∀i, j, sample H̃(1)_{i,j} from P(H̃(1)_{i,j} = 1) = σ(Ṽ_{i,:} W(1)_{:,j} + H̃(2)_{i,:} (W(2)_{j,:})⊤).
    end for
    ΔW(1) ← ΔW(1) - (1/m) Ṽ⊤ H̃(1)
    ΔW(2) ← ΔW(2) - (1/m) H̃(1)⊤ H̃(2)
    W(1) ← W(1) + ε ΔW(1) (this is a cartoon illustration; in practice use a more effective algorithm, such as momentum with a decaying learning rate)
    W(2) ← W(2) + ε ΔW(2)
end while
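A minimal numpy sketch of the loop structure of algorithm 20.1. The toy layer sizes, the absence of biases, and the fixed iteration counts in place of convergence checks are my assumptions, not part of the book's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Hypothetical toy sizes: 6 visible units, hidden layers of 4 and 3, minibatch m = 8.
n_v, n_h1, n_h2, m = 6, 4, 3, 8
W1 = 0.01 * rng.standard_normal((n_v, n_h1))
W2 = 0.01 * rng.standard_normal((n_h1, n_h2))

# Persistent negative-phase chains (one chain per row).
V_t = rng.integers(0, 2, (m, n_v)).astype(float)
H1_t = rng.integers(0, 2, (m, n_h1)).astype(float)
H2_t = rng.integers(0, 2, (m, n_h2)).astype(float)

def sml_step(V, W1, W2, V_t, H1_t, H2_t, eps=0.01, k=5, mf_iters=10):
    """One variational stochastic maximum likelihood update (no bias terms)."""
    # Mean field inference loop: fixed-point iteration on the variational means.
    H1 = np.full((V.shape[0], W1.shape[1]), 0.5)
    H2 = np.full((V.shape[0], W2.shape[1]), 0.5)
    for _ in range(mf_iters):
        H1 = sigmoid(V @ W1 + H2 @ W2.T)
        H2 = sigmoid(H1 @ W2)
    # Positive phase statistics from the variational posterior.
    dW1 = V.T @ H1 / V.shape[0]
    dW2 = H1.T @ H2 / V.shape[0]
    # Negative phase: k steps of block Gibbs sampling on the persistent chains.
    for _ in range(k):
        # Block 1: v and h(2) are conditionally independent given h(1).
        V_t = (rng.random(V_t.shape) < sigmoid(H1_t @ W1.T)).astype(float)
        H2_t = (rng.random(H2_t.shape) < sigmoid(H1_t @ W2)).astype(float)
        # Block 2: h(1) given v and h(2).
        H1_t = (rng.random(H1_t.shape) < sigmoid(V_t @ W1 + H2_t @ W2.T)).astype(float)
    dW1 -= V_t.T @ H1_t / V_t.shape[0]
    dW2 -= H1_t.T @ H2_t / H1_t.shape[0]
    return W1 + eps * dW1, W2 + eps * dW2, V_t, H1_t, H2_t

V = rng.integers(0, 2, (m, n_v)).astype(float)
W1, W2, V_t, H1_t, H2_t = sml_step(V, W1, W2, V_t, H1_t, H2_t)
```

As the algorithm's own caveat says, a plain gradient step is a cartoon; a practical implementation would use momentum with a decaying learning rate.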
RBMs have been trained in this way, they can be combined to form a DBM. The DBM may then be trained with PCD. Typically, PCD training will make only a small change in the model's parameters and its performance as measured by the log-likelihood it assigns to the data, or its ability to classify inputs. See figure 20.4 for an illustration of the training procedure. This greedy layer-wise training procedure is not just coordinate ascent. It bears some passing resemblance to coordinate ascent because we optimize one subset of the parameters at each step. The two methods differ because the greedy layer-wise training procedure uses a different objective function at each step. Greedy layer-wise pretraining of a DBM differs from greedy layer-wise pretraining of a DBN. The parameters of each individual RBM may be copied to the corresponding DBN directly. In the case of the DBM, the RBM parameters must be modified before inclusion in the DBM. A layer in the middle of the stack of RBMs is trained with only bottom-up input, but after the stack is combined to form the DBM, the layer will have both bottom-up and top
-down input. To account for this change, Salakhutdinov and Hinton (2009a) advocate dividing the weights of all but the top and bottom RBM in half before inserting them into the DBM. Additionally, the bottom RBM must be trained using two "copies" of each visible unit and the weights tied to be equal between the two copies. This means that the weights are effectively doubled during the upward pass. Similarly, the top RBM should be trained with two copies of the topmost layer. Obtaining state of the art results with the deep Boltzmann machine requires a modification of the standard SML algorithm, which is to use a small amount of mean field during the negative phase of the joint PCD training step (Salakhutdinov and Hinton, 2009a). Specifically, the expectation of the energy gradient should be computed with respect to the mean field distribution in which all of the units are independent from each other. The
parameters of this mean field distribution should be obtained by running the mean field fixed point equations for just one step. See Goodfellow et al. (2013b) for a comparison of the performance of centered DBMs with and without the use of partial mean field in the negative phase.

20.4.5 Jointly Training Deep Boltzmann Machines

Classic DBMs require greedy unsupervised pretraining and, to perform classification well, require a separate MLP-based classifier on top of the hidden features they extract. This has some undesirable properties. It is hard to track performance during training because we cannot evaluate properties of the full DBM while training the first RBM. Thus, it is hard to tell how well our hyperparameters
Figure 20.4: The deep Boltzmann machine training procedure used to classify the MNIST dataset (Salakhutdinov and Hinton, 2009a; Srivastava et al., 2014). (a) Train an RBM by using CD to approximately maximize log p(v). (b) Train a second RBM that models h(1) and target class y by using CD-k to approximately maximize log p(h(1), y), where h(1) is drawn from the first RBM's posterior conditioned on the data. Increase k from 1 to 20 during learning. (c) Combine the two RBMs into a DBM. Train it to approximately maximize log p(v, y) using stochastic maximum likelihood with k = 5. (d) Delete y from the model. Define a new set of features h(1) and h(2) that are obtained by running mean field inference in the model lacking y. Use these features as input to an MLP whose structure is the same as an additional pass of mean field, with an additional output layer for the estimate of y
. Initialize the MLP's weights to be the same as the DBM's weights. Train the MLP to approximately maximize log p(y | v) using stochastic gradient descent and dropout. Figure reprinted from Goodfellow et al. (2013b).
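The greedy stacking recipe in panels (a) and (b) above can be sketched in numpy. The CD-1 trainer `train_rbm_cd1` is a hypothetical helper with toy sizes and no biases, not the exact procedure of the figure (which uses CD-k and an extra label unit):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def train_rbm_cd1(V, n_hidden, epochs=50, eps=0.05):
    """Hypothetical helper: fit a bias-free RBM with one-step contrastive divergence."""
    W = 0.01 * rng.standard_normal((V.shape[1], n_hidden))
    for _ in range(epochs):
        H = sigmoid(V @ W)                                             # p(h = 1 | v)
        Hs = (rng.random(H.shape) < H).astype(float)                   # sample hidden units
        V1 = (rng.random(V.shape) < sigmoid(Hs @ W.T)).astype(float)   # reconstruct v
        H1r = sigmoid(V1 @ W)
        W += eps * (V.T @ H - V1.T @ H1r) / V.shape[0]                 # CD-1 gradient estimate
    return W

# Layer-wise stack: each RBM models samples from the previous RBM's posterior.
data = rng.integers(0, 2, (32, 10)).astype(float)
W1 = train_rbm_cd1(data, 6)
H1 = (rng.random((32, 6)) < sigmoid(data @ W1)).astype(float)
W2 = train_rbm_cd1(H1, 4)
# W1 and W2 would then be combined into a DBM, with the weight halving/doubling
# adjustments described in the text.
```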
are working until quite late in the training process. Software implementations of DBMs need to have many different components for CD training of individual RBMs, PCD training of the full DBM, and training based on back-propagation through the MLP. Finally, the MLP on top of the Boltzmann machine loses many of the advantages of the Boltzmann machine probabilistic model, such as being able to perform inference when some input values are missing. There are two main ways to resolve the joint training problem of the deep Boltzmann machine. The first is the centered deep Boltzmann machine (Montavon and Muller, 2012), which reparametrizes the model in order to make the Hessian of the cost function better conditioned at the beginning of the learning process. This yields a model that can be trained without a greedy layer-wise pretraining stage. The resulting model obtains excellent test set log-likelihood and produces high quality samples. Unfortunately, it remains unable to compete with appropriately regularized MLPs as a classifier. The second way to jointly train a deep Boltzmann machine is to use a multi-prediction deep Boltzmann machine (Goodfellow
et al., 2013b). This model uses an alternative training criterion that allows the use of the back-propagation algorithm in order to avoid the problems with MCMC estimates of the gradient. Unfortunately, the new criterion does not lead to good likelihood or samples, but, compared to the MCMC approach, it does lead to superior classification performance and ability to reason well about missing inputs. The centering trick for the Boltzmann machine is easiest to describe if we return to the general view of a Boltzmann machine as consisting of a set of units x with a weight matrix U and biases b. Recall from equation 20.2 that the energy function is given by

E(x) = -x⊤ U x - b⊤ x. (20.36)

Using different sparsity patterns in the weight matrix U, we can implement structures of Boltzmann machines, such as RBMs, or DBMs with different numbers of layers. This is accomplished by partitioning x into visible and hidden units and zeroing
out elements of U for units that do not interact. The centered Boltzmann machine introduces a vector µ that is subtracted from all of the states:

E'(x; U, b) = -(x - µ)⊤ U (x - µ) - (x - µ)⊤ b. (20.37)

Typically µ is a hyperparameter fixed at the beginning of training. It is usually chosen to make sure that x - µ ≈ 0 when the model is initialized. This reparametrization does not change the set of probability distributions that the model can represent, but it does change the dynamics of stochastic gradient descent applied to the likelihood. Specifically, in many cases, this reparametrization results
in a Hessian matrix that is better conditioned. Melchior et al. (2013) experimentally confirmed that the conditioning of the Hessian matrix improves, and observed that the centering trick is equivalent to another Boltzmann machine learning technique, the enhanced gradient (Cho et al., 2011). The improved conditioning of the Hessian matrix allows learning to succeed, even in difficult cases like training a deep Boltzmann machine with multiple layers. The other approach to jointly training deep Boltzmann machines is the multi-prediction deep Boltzmann machine (MP-DBM), which works by viewing the mean field equations as defining a family of recurrent networks for approximately solving every possible inference problem (Goodfellow et al., 2013b). Rather than training the model to maximize the likelihood, the model is trained to make each recurrent network obtain an accurate answer to the corresponding inference problem. The training process is illustrated in figure 20.5. It consists of randomly sampling a training example, randomly sampling a subset of inputs to the inference network, and then training the inference network to predict the values of the remaining units. This general principle of back-propagating through the computational graph
for approximate inference has been applied to other models (Stoyanov et al., 2011; Brakel et al., 2013). In these models and in the MP-DBM, the final loss is not the lower bound on the likelihood. Instead, the final loss is typically based on the approximate conditional distribution that the approximate inference network imposes over the missing values. This means that the training of these models is somewhat heuristically motivated. If we inspect the p(v) represented by the Boltzmann machine learned by the MP-DBM, it tends to be somewhat defective, in the sense that Gibbs sampling yields poor samples. Back-propagation through the inference graph has two main advantages. First, it trains the model as it is really used, with approximate inference. This means that approximate inference, for example, to fill in missing inputs, or to perform classification despite the presence of missing inputs, is more accurate in the MP-DBM than in the original DB
M. The original DBM does not make an accurate classifier on its own; the best classification results with the original DBM were based on training a separate classifier to use features extracted by the DBM, rather than by using inference in the DBM to compute the distribution over the class labels. Mean field inference in the MP-DBM performs well as a classifier without special modifications. The other advantage of back-propagating through approximate inference is that back-propagation computes the exact gradient of the loss. This is better for optimization than the approximate gradients of SML training, which suffer from both bias and variance. This probably explains why MP-
Figure 20.5: An illustration of the multi-prediction training process for a deep Boltzmann machine. Each row indicates a different example within a minibatch for the same training step. Each column represents a time step within the mean field inference process. For each example, we sample a subset of the data variables to serve as inputs to the inference process. These variables are shaded black to indicate conditioning. We then run the mean field inference process, with arrows indicating which variables influence which other variables in the process. In practical applications, we unroll mean field for several steps. In this illustration, we unroll for only two steps. Dashed arrows indicate how the process could be unrolled for more steps. The data variables that were not used as inputs to the inference process become targets, shaded in gray. We can view the inference process for each example as a recurrent network. We use gradient descent and back-propagation to train these recurrent networks to produce the correct targets given their inputs. This trains the mean field process for the MP-DBM to produce accurate estimates. Figure adapted from Goodfellow et al. (2013b).
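A rough numpy sketch of the unrolled mean field inference that figure 20.5 depicts. The toy weights are assumptions; training would back-propagate a loss on the gray target units through these unrolled steps, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Hypothetical toy DBM: 5 visible units, hidden layers of 4 and 3.
W1 = 0.1 * rng.standard_normal((5, 4))
W2 = 0.1 * rng.standard_normal((4, 3))

def mp_inference(v, observed_mask, steps=2):
    """Unrolled mean field with a random subset of visible units observed.

    Unobserved visible units are treated as latent variables whose means are
    inferred; each unrolled step is one layer of the 'recurrent network'."""
    v_hat = np.where(observed_mask, v, 0.5)          # init unobserved units at 0.5
    h1 = np.full(W1.shape[1], 0.5)
    h2 = np.full(W2.shape[1], 0.5)
    for _ in range(steps):
        h1 = sigmoid(v_hat @ W1 + h2 @ W2.T)
        h2 = sigmoid(h1 @ W2)
        v_mean = sigmoid(h1 @ W1.T)                  # top-down prediction of v
        v_hat = np.where(observed_mask, v, v_mean)   # clamp the observed units
    return v_hat                                     # predictions for target units

v = rng.integers(0, 2, 5).astype(float)
mask = rng.random(5) < 0.6                           # random subset serves as input
pred = mp_inference(v, mask)
```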
DBMs may be trained jointly while DBMs require a greedy layer-wise pretraining. The disadvantage of back-propagating through the approximate inference graph is that it does not provide a way to optimize the log-likelihood, but rather a heuristic approximation of the generalized pseudolikelihood. The MP-DBM inspired the NADE-k (Raiko et al., 2014) extension to the NADE framework, which is described in section 20.10.10. The MP-DBM has some connections to dropout. Dropout shares the same parameters among many different computational graphs, with the difference between each graph being whether it includes or excludes each unit. The MP-DBM also shares parameters across many computational graphs. In the case of the MP-DBM, the difference between the graphs is whether each input unit is observed or not. When a unit is not observed, the MP-DBM does not delete it entirely as dropout does. Instead, the MP-DBM treats it as a latent variable to be inferred. One could imagine applying dropout to the MP-DBM by additionally removing some units rather than making them late
nt.

20.5 Boltzmann Machines for Real-Valued Data

While Boltzmann machines were originally developed for use with binary data, many applications such as image and audio modeling seem to require the ability to represent probability distributions over real values. In some cases, it is possible to treat real-valued data in the interval [0, 1] as representing the expectation of a binary variable. For example, Hinton (2000) treats grayscale images in the training set as defining [0, 1] probability values. Each pixel defines the probability of a binary value being 1, and the binary pixels are all sampled independently from each other. This is a common procedure for evaluating binary models on grayscale image datasets. However, it is not a particularly theoretically satisfying approach, and binary images sampled independently in this way have a noisy appearance. In this section, we present Boltzmann machines that define a probability density over real-valued data.

20.5.1 Gauss
ian-Bernoulli RBMs

Restricted Boltzmann machines may be developed for many exponential family conditional distributions (Welling et al., 2005). Of these, the most common is the RBM with binary hidden units and real-valued visible units, with the conditional distribution over the visible units being a Gaussian distribution whose mean is a function of the hidden units.
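A small numpy sketch of this conditional under the diagonal-precision parametrization developed below: sampling v | h from N(Wh, β⁻¹). The sizes and precision values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 real-valued visible units, 2 binary hidden units.
W = rng.standard_normal((3, 2))
beta = np.array([4.0, 1.0, 0.25])    # diagonal precision of the visible units

def sample_v_given_h(h, n=100_000):
    """Sample from p(v | h) = N(v; Wh, beta^{-1}) with diagonal precision beta."""
    mean = W @ h
    std = 1.0 / np.sqrt(beta)        # diagonal precision -> per-unit std dev
    return mean + std * rng.standard_normal((n, 3))

h = np.array([1.0, 0.0])
v = sample_v_given_h(h)
# The empirical mean approaches Wh and the empirical variance approaches 1/beta.
```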
There are many ways of parametrizing Gaussian-Bernoulli RBMs. One choice is whether to use a covariance matrix or a precision matrix for the Gaussian distribution. Here we present the precision formulation. The modification to obtain the covariance formulation is straightforward. We wish to have the conditional distribution

p(v | h) = N(v; Wh, β⁻¹). (20.38)

We can find the terms we need to add to the energy function by expanding the unnormalized log conditional distribution:

log N(v; Wh, β⁻¹) = -(1/2)(v - Wh)⊤ β (v - Wh) + f(β). (20.39)

Here f encapsulates all the terms that are a function only of the parameters and not the random variables in the model. We can discard f because its only role is to normalize the distribution, and the partition function of whatever energy function we choose will carry out that role. If we include all of the terms (with their sign flipped) involving v from equation 20.39 in our energy function and do not add any other
terms involving v, then our energy function will represent the desired conditional p(v | h). We have some freedom regarding the other conditional distribution, p(h | v). Note that equation 20.39 contains a term

(1/2) h⊤ W⊤ β W h. (20.40)

This term cannot be included in its entirety because it includes h_i h_j terms. These correspond to edges between the hidden units. If we included these terms, we would have a linear factor model instead of a restricted Boltzmann machine. When designing our Boltzmann machine, we simply omit these h_i h_j cross terms. Omitting them does not change the conditional p(v | h), so equation 20.39 is still respected. However, we still have a choice about whether to include the terms involving only a single h_i. If we assume a diagonal precision matrix, we find that for each hidden unit h_i we have a term

(1/2) h_i Σ_j β_j W²_{j,i}. (20
.41)

In the above, we used the fact that h_i² = h_i because h_i ∈ {0, 1}. If we include this term (with its sign flipped) in the energy function, then it will naturally bias h_i to be turned off when the weights for that unit are large and connected to visible units with high precision. The choice of whether or not to include this bias term does not change the family of distributions the model can represent (assuming that
we include bias parameters for the hidden units), but it does change the learning dynamics of the model. Including the term may help the hidden unit activations remain reasonable even when the weights rapidly increase in magnitude. One way to define the energy function on a Gaussian-Bernoulli RBM is thus

E(v, h) = (1/2) v⊤ (β ⊙ v) - (v ⊙ β)⊤ W h - b⊤ h, (20.42)

but we may also add extra terms or parametrize the energy in terms of the variance rather than precision if we choose. In this derivation, we have not included a bias term on the visible units, but one could easily be added. One final source of variability in the parametrization of a Gaussian-Bernoulli RBM is the choice of how to treat the precision matrix. It may either be fixed to a constant (perhaps estimated based on the marginal precision of the data) or learned. It may also be a scalar times the identity matrix, or it may be a diagonal matrix. Typically we do not allow the precision matrix to be non-diagonal in this context, because some operations on the Gaussian distribution require inverting the matrix
, and a diagonal matrix can be inverted trivially. In the sections ahead, we will see that other forms of Boltzmann machines permit modeling the covariance structure, using various techniques to avoid inverting the precision matrix.

20.5.2 Undirected Models of Conditional Covariance

While the Gaussian RBM has been the canonical energy model for real-valued data, Ranzato et al. (2010a) argue that the Gaussian RBM inductive bias is not well suited to the statistical variations present in some types of real-valued data, especially natural images. The problem is that much of the information content present in natural images is embedded in the covariance between pixels rather than in the raw pixel values. In other words, it is the relationships between pixels and not their absolute values where most of the useful information in images resides. Since the Gaussian RBM only models the conditional mean of the input given the hidden units, it cannot capture conditional covariance
information. In response to these criticisms, alternative models have been proposed that attempt to better account for the covariance of real-valued data. These models include the mean and covariance RBM (mcRBM¹), the mean-product of Student's t-distributions (mPoT) model, and the spike and slab RBM (ssRBM).

¹ The term "mcRBM" is pronounced by saying the name of the letters M-C-R-B-M; the "mc" is not pronounced like the "Mc" in "McDonald's."
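The limitation that motivates these models can be seen numerically: in a toy diagonal-precision Gaussian RBM (the sizes and precision values are assumptions), changing the hidden units moves the conditional mean but leaves the conditional covariance fixed at diag(1/β):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gaussian RBM with diagonal precision (hypothetical sizes).
W = rng.standard_normal((4, 3))
beta = np.array([1.0, 2.0, 4.0, 8.0])

def sample_v_given_h(h, n=200_000):
    # p(v | h) = N(Wh, beta^{-1}): only the mean depends on h.
    return W @ h + rng.standard_normal((n, 4)) / np.sqrt(beta)

v_a = sample_v_given_h(np.array([1.0, 0.0, 0.0]))
v_b = sample_v_given_h(np.array([0.0, 1.0, 1.0]))
# The conditional means differ, but the conditional covariance is the same
# diagonal matrix diag(1/beta) for every h: the hidden units cannot modulate
# covariance, which is what the mcRBM, mPoT, and ssRBM are designed to fix.
```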
Mean and Covariance RBM. The mcRBM uses its hidden units to independently encode the conditional mean and covariance of all observed units. The mcRBM hidden layer is divided into two groups of units: mean units and covariance units. The group that models the conditional mean is simply a Gaussian RBM. The other half is a covariance RBM (Ranzato et al., 2010a), also called a cRBM, whose components model the conditional covariance structure, as described below. Specifically, with binary mean units h(m) and binary covariance units h(c), the mcRBM model is defined as the combination of two energy functions:

E_mc(x, h(m), h(c)) = E_m(x, h(m)) + E_c(x, h(c)), (20.43)

where E_m is the standard Gaussian-Bernoulli RBM energy function:²

E_m(x, h(m)) = (1/2) x⊤ x - Σ_j x⊤ W_{:,j} h(m)_j - Σ_j b(m)_j
h(m)_j, (20.44)

and E_c is the cRBM energy function that models the conditional covariance information:

E_c(x, h(c)) = (1/2) Σ_j h(c)_j (x⊤ r(j))² - Σ_j b(c)_j h(c)_j. (20.45)

The parameter r(j) corresponds to the covariance weight vector associated with h(c)_j, and b(c) is a vector of covariance offsets. The combined energy function defines a joint distribution:

p_mc(x, h(m), h(c)) = (1/Z) exp{ -E_mc(x, h(m), h(c)) }, (20.46)

and a corresponding conditional distribution over the observations given h(m) and h(c) as a multivariate Gaussian distribution:

p_mc(x | h(m), h(c)) = N( x; C_{x|h}^{mc} Σ_j W_{:,j} h(m)_j, C_{x|h}^{mc} ). (20.47)
Note that the covariance matrix C_{x|h}^{mc} = ( Σ_j h(c)_j r(j) r(j)⊤ + I )⁻¹ is non-diagonal and that W is the weight matrix associated with the Gaussian RBM modeling the

² This version of the Gaussian-Bernoulli RBM energy function assumes the image data has zero mean, per pixel. Pixel offsets can easily be added to the model to account for nonzero pixel means.
It is difficult to train the mcRBM via contrastive divergence or persistent contrastive divergence because of its non-diagonal conditional covariance structure. CD and PCD require sampling from the joint distribution of x, h^{(m)} and h^{(c)}, which, in a standard RBM, is accomplished by Gibbs sampling over the conditionals. However, in the mcRBM, sampling from p_{mc}(x | h^{(m)}, h^{(c)}) requires computing (C^{mc})^{-1} at every iteration of learning. This can be an impractical computational burden for larger observations. Ranzato and Hinton (2010) avoid direct sampling from the conditional p_{mc}(x | h^{(m)}, h^{(c)}) by sampling directly from the marginal p(x) using Hamiltonian (hybrid) Monte Carlo (Neal, 1993) on the mcRBM free energy.

Mean-Product of Student's t-Distributions

The mean-product of Student's t-distribution (mPoT) model (Ranzato et al., 2010b) extends the PoT model (Welling et al., 2003a) in a manner similar to how the mcRBM extends the cRBM.
This is achieved by including nonzero Gaussian means through the addition of Gaussian RBM-like hidden units. Like the mcRBM, the PoT conditional distribution over the observation is a multivariate Gaussian (with non-diagonal covariance) distribution; however, unlike the mcRBM, the complementary conditional distribution over the hidden variables is given by conditionally independent Gamma distributions. The Gamma distribution G(k, θ) is a probability distribution over positive real numbers, with mean kθ. A more detailed understanding of the Gamma distribution is not necessary to understand the basic ideas underlying the mPoT model.

The mPoT energy function is:

E^{mPoT}(x, h^{(m)}, h^{(c)})  (20.48)
  = E^{m}(x, h^{(m)}) + \sum_j [ h_j^{(c)} ( 1 + (1/2) (r^{(j)T} x)^2 ) + (1 - \gamma_j) log h_j^{(c)} ],  (20.49)
where r^{(j)} is the covariance weight vector associated with unit h_j^{(c)}, and E^{m}(x, h^{(m)}) is as defined in equation 20.44.

Just as with the mcRBM, the mPoT model energy function specifies a multivariate Gaussian, with a conditional distribution over x that has non-diagonal covariance. Learning in the mPoT model is, again like the mcRBM, complicated by the inability to sample from the non-diagonal Gaussian conditional p_{mPoT}(x | h^{(m)}, h^{(c)}), so Ranzato et al. (2010b) also advocate direct sampling of p(x) via Hamiltonian (hybrid) Monte Carlo.
Spike and Slab Restricted Boltzmann Machines

Spike and slab restricted Boltzmann machines (Courville et al., 2011), or ssRBMs, provide another means of modeling the covariance structure of real-valued data. Compared to mcRBMs, ssRBMs have the advantage of requiring neither matrix inversion nor Hamiltonian Monte Carlo methods. Like the mcRBM and the mPoT model, the ssRBM's binary hidden units encode the conditional covariance across pixels through the use of auxiliary real-valued variables.

The spike and slab RBM has two sets of hidden units: binary spike units h and real-valued slab units s. The mean of the visible units conditioned on the hidden units is given by \sum_i W_{:,i} s_i h_i. In other words, each column W_{:,i} defines a component that can appear in the input when h_i = 1. The corresponding spike variable h_i determines whether that component is present at all. The corresponding slab variable s_i determines the intensity of that component, if it is present. When a spike variable is active, the corresponding slab variable adds variance to the input along the axis defined by W_{:,i}. This allows us to model the covariance of the inputs.
Fortunately, contrastive divergence and persistent contrastive divergence with Gibbs sampling are still applicable. There is no need to invert any matrix.

Formally, the ssRBM model is defined via its energy function:

E^{ss}(x, s, h) = - \sum_i x^T W_{:,i} s_i h_i + (1/2) x^T ( \Lambda + \sum_i \Phi_i h_i ) x  (20.50)
  + (1/2) \sum_i \alpha_i s_i^2 - \sum_i \alpha_i \mu_i s_i h_i - \sum_i b_i h_i + \sum_i \alpha_i \mu_i^2 h_i,  (20.51)

where b_i is the offset of the spike h_i, and \Lambda is a diagonal precision matrix on the observations x. The parameter \alpha_i > 0 is a scalar precision parameter for the real-valued slab variable s_i. The parameter \Phi_i is a non-negative diagonal matrix that defines an h-modulated quadratic penalty on x. Each \mu_i is a mean parameter for the slab variable s_i.
With the joint distribution defined via the energy function, it is relatively straightforward to derive the ssRBM conditional distributions. For example, by marginalizing out the slab variables s, the conditional distribution over the observations given the binary spike variables is:

p_{ss}(x | h) = (1 / p(h)) (1/Z) \int exp{ -E(x, s, h) } ds  (20.52)
  = N( x; C^{ss}_{x|h} \sum_i W_{:,i} \mu_i h_i, C^{ss}_{x|h} ),  (20.53)
where C^{ss}_{x|h} = ( \Lambda + \sum_i \Phi_i h_i - \sum_i \alpha_i^{-1} h_i W_{:,i} W_{:,i}^T )^{-1}. The last equality holds only if the covariance matrix C^{ss}_{x|h} is positive definite.

Gating by the spike variables means that the true marginal distribution over h ⊙ s is sparse. This is different from sparse coding, where samples from the model "almost never" (in the measure-theoretic sense) contain zeros in the code, and MAP inference is required to impose sparsity.

Comparing the ssRBM to the mcRBM and the mPoT models, the ssRBM parametrizes the conditional covariance of the observation in a significantly different way. The mcRBM and mPoT both model the covariance structure of the observation as ( \sum_j h_j^{(c)} r^{(j)} r^{(j)T} + I )^{-1}, using the activation of the hidden units h_j > 0 to enforce constraints on the conditional covariance in the direction r^{(j)}. In contrast, the ssRBM specifies the conditional covariance of the observations using the hidden spike activations h_i = 1 to pinch the precision matrix along the direction specified by the corresponding weight vector.
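The subtracted rank-one terms in the ssRBM precision matrix are what can break positive definiteness. A minimal numeric sketch, with all values made up for illustration and the \Phi_i penalty terms set to zero for simplicity:

```python
import numpy as np

# Illustrative sketch of the ssRBM conditional precision
# Lambda + sum_i Phi_i h_i - sum_i alpha_i^{-1} h_i W_{:,i} W_{:,i}^T,
# showing that some parameter settings yield a non-positive-definite matrix.
d = 3
W = np.eye(d)                  # one weight column W_{:,i} per spike unit
h = np.array([1.0, 1.0, 0.0])  # active spike units
Lam = np.eye(d)                # diagonal precision Lambda on x

def ss_precision(alpha):
    """Inverse of C^ss_{x|h} for slab precisions alpha (with Phi_i = 0)."""
    P = Lam.copy()
    for i in range(d):
        P -= (h[i] / alpha[i]) * np.outer(W[:, i], W[:, i])
    return P

P_ok = ss_precision(np.array([2.0, 2.0, 2.0]))   # eigenvalues 0.5, 0.5, 1: PD
P_bad = ss_precision(np.array([0.5, 0.5, 0.5]))  # eigenvalues -1, -1, 1: not PD
```

With small slab precisions alpha_i, the subtracted terms overwhelm \Lambda and the "covariance" ceases to define a normalizable Gaussian, which is exactly the failure mode discussed below.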
The ssRBM conditional covariance is very similar to that given by a different model: the product of probabilistic principal components analysis (PoPPCA) (Williams and Agakov, 2002). In the overcomplete setting, sparse activations with the ssRBM parametrization permit significant variance (above the nominal variance given by \Lambda^{-1}) only in the selected directions of the sparsely activated h_i. In the mcRBM or mPoT models, an overcomplete representation would mean that capturing variation in a particular direction in the observation space requires removing potentially all constraints with positive projection in that direction. This suggests that these models are less well suited to the overcomplete setting.

The primary disadvantage of the spike and slab restricted Boltzmann machine is that some settings of its parameters can correspond to a covariance matrix that is not positive definite.
Such a covariance matrix places more unnormalized probability on values that are farther from the mean, causing the integral over all possible outcomes to diverge. Generally this issue can be avoided with simple heuristic tricks; there is not yet any theoretically satisfying solution. Using constrained optimization to explicitly avoid the regions where the probability is undefined is difficult to do without being overly conservative and thereby preventing the model from accessing high-performing regions of parameter space.

Qualitatively, convolutional variants of the ssRBM produce excellent samples of natural images. Some examples are shown in figure 16.1.

The ssRBM allows for several extensions. Including higher-order interactions and average-pooling of the slab variables (Courville et al., 2014) enables the model to learn excellent features for a classifier when labeled data is scarce.
Adding a term to the energy function that prevents the partition function from becoming undefined results in a sparse coding model, spike and slab sparse coding (Goodfellow et al., 2013d), also known as S3C.

20.6 Convolutional Boltzmann Machines

As seen in chapter 9, extremely high-dimensional inputs such as images place great strain on the computation, memory, and statistical requirements of machine learning models. Replacing matrix multiplication by discrete convolution with a small kernel is the standard way of solving these problems for inputs that have translation-invariant spatial or temporal structure. Desjardins and Bengio (2008) showed that this approach works well when applied to RBMs.

Deep convolutional networks usually require a pooling operation so that the spatial size of each successive layer decreases. Feedforward convolutional networks often use a pooling function such as the maximum of the elements to be pooled. It is unclear how to generalize this to the setting of energy-based models. We could introduce a binary pooling unit p over n binary detector units d and enforce p = max_i d_i by setting the energy function to be ∞ whenever that constraint is violated.
This does not scale well, though, as it requires evaluating 2^n different energy configurations to compute the normalization constant. For a small 3 × 3 pooling region this requires 2^9 = 512 energy function evaluations per pooling unit!

Lee et al. (2009) developed a solution to this problem called probabilistic max pooling (not to be confused with "stochastic pooling," which is a technique for implicitly constructing ensembles of convolutional feedforward networks). The strategy behind probabilistic max pooling is to constrain the detector units so that at most one may be active at a time. This means there are only n + 1 total states (one state for each of the n detector units being on, and an additional state corresponding to all of the detector units being off). The pooling unit is on if and only if one of the detector units is on. The state with all units off is assigned energy zero.
We can think of this as describing a model with a single variable that has n + 1 states, or equivalently as a model that has n + 1 variables and assigns energy ∞ to all but n + 1 joint assignments of variables.

While efficient, probabilistic max pooling does force the detector units to be mutually exclusive, which may be a useful regularizing constraint in some contexts or a harmful limit on model capacity in others. It also does not support overlapping pooling regions. Overlapping pooling regions are usually required to obtain the best performance from feedforward convolutional networks, so this constraint probably greatly reduces the performance of convolutional Boltzmann machines.
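The n + 1 allowed states make the pooling region's local distribution trivially cheap to normalize. A minimal sketch, where the bottom-up detector activations a_i are made-up values for illustration:

```python
import numpy as np

# Probabilistic max pooling over one pooling region: with n detector
# units constrained so that at most one is active, only n + 1 joint
# states exist, so normalization sums over n + 1 terms instead of 2^n.
a = np.array([0.2, 1.5, -0.3, 0.8])   # n = 4 illustrative detector activations
n = a.size

# unnormalized probabilities: exp(0) = 1 for the all-off state (energy
# zero), exp(a_i) for the state in which only detector i is on
unnormalized = np.concatenate(([1.0], np.exp(a)))
probs = unnormalized / unnormalized.sum()

p_all_off = probs[0]
p_pool_on = probs[1:].sum()   # pooling unit is on iff some detector is on
```

The same softmax over n + 1 terms replaces the 2^n-term sum required without the mutual-exclusivity constraint.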
Lee et al. (2009) demonstrated that probabilistic max pooling could be used to build convolutional deep Boltzmann machines.^3 This model is able to perform operations such as filling in missing portions of its input. While intellectually appealing, it is challenging to make work in practice, and usually does not perform as well as a classifier as traditional convolutional networks trained with supervised learning.

Many convolutional models work equally well with inputs of many different spatial sizes. For Boltzmann machines, it is difficult to change the input size, for a variety of reasons. The partition function changes as the size of the input changes. Moreover, many convolutional networks achieve size invariance by scaling up the size of their pooling regions in proportion to the size of the input, but scaling Boltzmann machine pooling regions is awkward. Traditional convolutional neural networks can use a fixed number of pooling units and dynamically increase the size of their pooling regions in order to obtain a fixed-size representation of a variable-sized input. For Boltzmann machines, large pooling regions become too expensive for the naive approach.
The approach of Lee et al. (2009) of making each of the detector units in the same pooling region mutually exclusive solves the computational problems, but still does not allow variable-size pooling regions. For example, suppose we learn a model with 2 × 2 probabilistic max pooling over detector units that learn edge detectors. This enforces the constraint that only one of these edges may appear in each 2 × 2 region. If we then increase the size of the input image by 50% in each direction, we would expect the number of edges to increase correspondingly. Instead, if we increase the size of the pooling regions by 50% in each direction to 3 × 3, then the mutual exclusivity constraint specifies that each of these edges may appear only once in a 3 × 3 region. As we grow a model's input image in this way, the model generates edges with less density.
Of course, these issues arise only when the model must use variable amounts of pooling in order to emit a fixed-size output vector. Models that use probabilistic max pooling may still accept variable-sized input images so long as the output of the model is a feature map that can scale in size proportionally to the input image.

Pixels at the boundary of the image also pose some difficulty, which is exacerbated by the fact that connections in a Boltzmann machine are symmetric. If we do not implicitly zero-pad the input, then there are fewer hidden units than visible units, and the visible units at the boundary of the image are not modeled well because they lie in the receptive field of fewer hidden units.

^3 The publication describes the model as a "deep belief network," but because it can be described as a purely undirected model with tractable layer-wise mean field fixed point updates, it best fits the definition of a deep Boltzmann machine.
However, if we do implicitly zero-pad the input, then the hidden units at the boundary are driven by fewer input pixels and may fail to activate when needed.

20.7 Boltzmann Machines for Structured or Sequential Outputs

In the structured output scenario, we wish to train a model that can map from some input x to some output y, where the different entries of y are related to each other and must obey some constraints. For example, in the speech synthesis task, y is a waveform, and the entire waveform must sound like a coherent utterance.

A natural way to represent the relationships between the entries in y is to use a probability distribution p(y | x). Boltzmann machines, extended to model conditional distributions, can supply this probabilistic model.

The same tool of conditional modeling with a Boltzmann machine can be used not just for structured output tasks but also for sequence modeling. In the latter case, rather than mapping an input x to an output y, the model must estimate a probability distribution over a sequence of variables, p(x^{(1)}, ..., x^{(τ)}).
Conditional Boltzmann machines can represent factors of the form p(x^{(t)} | x^{(1)}, ..., x^{(t-1)}) in order to accomplish this task.

An important sequence modeling task for the video game and film industry is modeling sequences of joint angles of skeletons used to render 3-D characters. These sequences are often collected using motion capture systems to record the movements of actors. A probabilistic model of a character's movement allows the generation of new, previously unseen, but realistic animations. To solve this sequence modeling task, Taylor et al. (2007) introduced a conditional RBM modeling p(x^{(t)} | x^{(t-1)}, ..., x^{(t-m)}) for small m. The model is an RBM over p(x^{(t)}) whose bias parameters are a linear function of the preceding m values of x.
When we condition on different values of x^{(t-1)} and earlier variables, we get a new RBM over x. The weights in the RBM over x never change, but by conditioning on different past values, we can change the probability of different hidden units in the RBM being active. By activating and deactivating different subsets of hidden units, we can make large changes to the probability distribution induced on x. Other variants of conditional RBM (Mnih et al., 2011) and other variants of sequence modeling using conditional RBMs are possible (Taylor and Hinton, 2009; Sutskever et al., 2009; Boulanger-Lewandowski et al., 2012).

Another sequence modeling task is to model the distribution over sequences of musical notes used to compose songs.
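The conditioning scheme described above, in which the RBM weights stay fixed while the biases depend linearly on the past m frames, can be sketched as follows. The matrix names (A, B) and all dimensions here are hypothetical choices for illustration, not the notation of Taylor et al.

```python
import numpy as np

# Sketch of a conditional RBM's dynamic biases: the biases of the RBM
# over x^(t) are a linear function of the preceding m frames, while the
# RBM weights never change.  All sizes and values are illustrative.
rng = np.random.default_rng(1)
d, n_h, m = 6, 4, 3                        # visible dim, hidden dim, history length
b_v, b_h = np.zeros(d), np.zeros(n_h)      # static biases
A = rng.normal(size=(m, d, d)) * 0.1       # past-frame -> visible-bias maps
B = rng.normal(size=(m, n_h, d)) * 0.1     # past-frame -> hidden-bias maps
history = [rng.normal(size=d) for _ in range(m)]   # x^(t-1), ..., x^(t-m)

def dynamic_biases(history):
    """Biases of the RBM over x^(t) given the past m frames."""
    bv = b_v + sum(A[k] @ history[k] for k in range(m))
    bh = b_h + sum(B[k] @ history[k] for k in range(m))
    return bv, bh

bv_t, bh_t = dynamic_biases(history)
# a different past yields a different RBM over x^(t) with the same weights
```

Changing `history` changes which hidden units are likely to be active, which is how the fixed-weight RBM induces a different distribution over x^{(t)} at each time step.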
Boulanger-Lewandowski et al. (2012) introduced the RNN-RBM sequence model and applied it to this task. The RNN-RBM is a generative model of a sequence of frames x^{(t)}, consisting of an RNN that emits the RBM parameters for each time step. Unlike previous approaches, in which only the bias parameters of the RBM varied from one time step to the next, the RNN-RBM uses the RNN to emit all of the parameters of the RBM, including the weights. To train the model, we need to be able to back-propagate the gradient of the loss function through the RNN. The loss function is not applied directly to the RNN outputs; instead, it is applied to the RBM. This means that we must approximately differentiate the loss with respect to the RBM parameters using contrastive divergence or a related algorithm. This approximate gradient may then be back-propagated through the RNN using the usual back-propagation through time algorithm.
20.8 Other Boltzmann Machines

Many other variants of Boltzmann machines are possible. Boltzmann machines may be extended with different training criteria. We have focused on Boltzmann machines trained to approximately maximize the generative criterion log p(v). It is also possible to train discriminative RBMs that aim to maximize log p(y | v) instead (Larochelle and Bengio, 2008). This approach often performs best when using a linear combination of both the generative and the discriminative criteria. Unfortunately, RBMs do not seem to be as powerful supervised learners as MLPs, at least using existing methodology.

Most Boltzmann machines used in practice have only second-order interactions in their energy functions, meaning that their energy functions are the sum of many terms, with each individual term including the product between only two random variables. An example of such a term is v_i w_{i,j} h_j. It is also possible to train higher-order Boltzmann machines (Sejnowski, 1987), whose energy function terms involve the products between many variables.
Three-way interactions between a hidden unit and two different images can model spatial transformations from one frame of video to the next (Memisevic and Hinton, 2007, 2010). Multiplication by a one-hot class variable can change the relationship between visible and hidden units depending on which class is present (Nair and Hinton, 2009). One recent example of the use of higher-order interactions is a Boltzmann machine with two groups of hidden units, one group that interacts with both the visible units v and the class label y, and another group that interacts only with the v input values (Luo et al., 2011).
This can be interpreted as encouraging some hidden units to learn to model the input using features that are relevant to the class, but also to learn extra hidden units that explain nuisance details that are necessary for the samples of v to be realistic but do not determine the class of the example.

Another use of higher-order interactions is to gate some features. Sohn et al. (2013) introduced a Boltzmann machine with third-order interactions and binary mask variables associated with each visible unit. When these masking variables are set to zero, they remove the influence of a visible unit on the hidden units. This allows visible units that are not relevant to the classification problem to be removed from the inference pathway that estimates the class.

More generally, the Boltzmann machine framework is a rich space of models permitting many more model structures than have been explored so far. Developing a new form of Boltzmann machine requires more care and creativity than developing a new neural network layer, because it is often difficult to find an energy function that maintains tractability of all of the different conditional distributions needed to use the Boltzmann machine. Despite this required effort, the field remains open to innovation.
20.9 Back-Propagation through Random Operations

Traditional neural networks implement a deterministic transformation of some input variables x. When developing generative models, we often wish to extend neural networks to implement stochastic transformations of x. One straightforward way to do this is to augment the neural network with extra inputs z that are sampled from some simple probability distribution, such as a uniform or Gaussian distribution. The neural network can then continue to perform deterministic computation internally, but the function f(x, z) will appear stochastic to an observer who does not have access to z. Provided that f is continuous and differentiable, we can then compute the gradients necessary for training using back-propagation as usual.

As an example, let us consider the operation consisting of drawing samples y from a Gaussian distribution with mean μ and variance σ^2:

y ~ N(μ, σ^2).  (20.54)
Because an individual sample of y is produced not by a function but by a sampling process whose output changes every time we query it, it may seem counterintuitive to take the derivatives of y with respect to the parameters of its distribution, μ and σ^2. However, we can rewrite the sampling process as
transforming an underlying random value z ~ N(z; 0, 1) to obtain a sample from the desired distribution:

y = μ + σz.  (20.55)

We are now able to back-propagate through the sampling operation by regarding it as a deterministic operation with an extra input z. Crucially, the extra input is a random variable whose distribution is not a function of any of the variables whose derivatives we want to calculate. The result tells us how an infinitesimal change in μ or σ would change the output if we could repeat the sampling operation again with the same value of z.

Being able to back-propagate through this sampling operation allows us to incorporate it into a larger graph. We can build elements of the graph on top of the output of the sampling distribution. For example, we can compute the derivatives of some loss function J(y). We can also build elements of the graph whose outputs are the inputs or the parameters of the sampling operation. For example, we could build a larger graph with μ = f(x; θ) and σ = g(x; θ). In this augmented graph, we can use back-propagation through these functions to derive ∇_θ J(y).
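Holding z fixed in y = μ + σz gives dy/dμ = 1 and dy/dσ = z, so gradients of any loss J(y) flow through the sample by the chain rule. A small sketch checking this against finite differences; the loss J(y) = y^2 is an illustrative assumption.

```python
import numpy as np

# Back-propagation through Gaussian sampling via y = mu + sigma * z
# (eq. 20.55), with z held fixed.  J(y) = y^2 is an illustrative loss.
rng = np.random.default_rng(2)
z = rng.normal()                 # fixed source of randomness
mu, sigma = 0.7, 1.3

def J(mu, sigma, z):
    y = mu + sigma * z           # deterministic given z
    return y ** 2

# analytic gradients through the sampling operation
y = mu + sigma * z
dJ_dmu = 2 * y * 1.0             # dJ/dy * dy/dmu
dJ_dsigma = 2 * y * z            # dJ/dy * dy/dsigma

# finite-difference check, reusing the same z
eps = 1e-6
fd_mu = (J(mu + eps, sigma, z) - J(mu - eps, sigma, z)) / (2 * eps)
fd_sigma = (J(mu, sigma + eps, z) - J(mu, sigma - eps, z)) / (2 * eps)
```

The agreement depends on reusing the same z on both sides of the finite difference, mirroring the requirement in the text that z's distribution not depend on the parameters being differentiated.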
The principle used in this Gaussian sampling example is more generally applicable. We can express any probability distribution of the form p(y; θ) or p(y | x; θ) as p(y | ω), where ω is a variable containing both the parameters θ and, if applicable, the inputs x. Given a value y sampled from distribution p(y | ω), where ω may in turn be a function of other variables, we can rewrite

y ~ p(y | ω)  (20.56)

as

y = f(z; ω),  (20.57)

where z is a source of randomness. We may then compute the derivatives of y with respect to ω using traditional tools such as the back-propagation algorithm applied to f, so long as f is continuous and differentiable almost everywhere. Crucially, ω must not be a function of z, and z must not be a function of ω.
called the reparametrization trick, stochastic back-propagation, or perturbation analysis. The requirement that $f$ be continuous and differentiable of course requires $y$ to be continuous. If we wish to back-propagate through a sampling process that produces discrete-valued samples, it may still be possible to estimate a gradient on $\omega$, using reinforcement learning algorithms such as variants of the REINFORCE algorithm (Williams, 1992), discussed in section 20.9.1.
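As an illustrative sketch (not code from the book), the Gaussian reparametrization can be checked numerically. With $y = \mu + \sigma z$ and an arbitrary cost $J(y) = y^2$, the Monte Carlo gradients obtained by differentiating through the deterministic map should match the analytic values $\partial_\mu \mathbb{E}[y^2] = 2\mu$ and $\partial_\sigma \mathbb{E}[y^2] = 2\sigma$:

```python
import numpy as np

# Hypothetical sketch (not from the book): back-propagating through
# the reparametrized sample y = mu + sigma*z with cost J(y) = y**2.
# Analytically, E[y^2] = mu^2 + sigma^2, so the true gradients are
# d/dmu = 2*mu and d/dsigma = 2*sigma.
rng = np.random.default_rng(0)
mu, sigma = 0.5, 2.0
m = 200_000

z = rng.standard_normal(m)   # extra input; its distribution does not depend on mu or sigma
y = mu + sigma * z           # deterministic given z (equation 20.55)

# Chain rule through the deterministic map: dJ/dy = 2y, dy/dmu = 1, dy/dsigma = z
grad_mu = np.mean(2 * y)
grad_sigma = np.mean(2 * y * z)
print(grad_mu, grad_sigma)   # approx 2*mu = 1.0 and 2*sigma = 4.0
```

With 200,000 samples the estimates agree with the analytic gradients to within sampling noise, which is the whole point of the trick: the noise $z$ carries no dependence on $\mu$ or $\sigma$, so ordinary differentiation is valid.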
In neural network applications, we typically choose $z$ to be drawn from some simple distribution, such as a unit uniform or unit Gaussian distribution, and achieve more complex distributions by allowing the deterministic portion of the network to reshape its input.

The idea of propagating gradients or optimizing through stochastic operations dates back to the mid-twentieth century (Price, 1958; Bonnet, 1964) and was first used for machine learning in the context of reinforcement learning (Williams, 1992). More recently, it has been applied to variational approximations (Opper and Archambeau, 2009) and to stochastic or generative neural networks (Bengio et al., 2013b; Kingma, 2013; Kingma and Welling, 2014a,b; Rezende et al., 2014; Goodfellow et al., 2014c). Many networks, such as denoising autoencoders or networks regularized with dropout, are also naturally designed to take noise as an input without requiring any special reparametrization to make the noise independent from the model.

20.9.1 Back-Propagating through Discrete Stochastic Operations
When a model emits a discrete variable $y$, the reparametrization trick is not applicable. Suppose that the model takes inputs $x$ and parameters $\theta$, both encapsulated in the vector $\omega$, and combines them with random noise $z$ to produce $y$:
$$y = f(z; \omega). \tag{20.58}$$
Because $y$ is discrete, $f$ must be a step function. The derivatives of a step function are not useful at any point. Right at each step boundary, the derivatives are undefined, but that is a small problem. The large problem is that the derivatives are zero almost everywhere, on the regions between step boundaries. The derivatives of any cost function $J(y)$ therefore do not give any information for how to update the model parameters $\theta$.

The REINFORCE algorithm (REward Increment = Nonnegative Factor $\times$ Offset Reinforcement $\times$ Characteristic Eligibility) provides a framework defining a family of simple but powerful solutions (Williams, 1992). The core idea is that even though
$J(f(z; \omega))$ is a step function with useless derivatives, the expected cost $\mathbb{E}_{z \sim p(z)}\, J(f(z; \omega))$ is often a smooth function amenable to gradient descent. Although that expectation is typically not tractable when $y$ is high-dimensional (or is the result of the composition of many discrete stochastic decisions), it can be estimated without bias using a Monte Carlo average. The stochastic estimate of the gradient can be used with SGD or other stochastic gradient-based optimization techniques.
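As a minimal illustrative sketch (hypothetical, not from the book), such a Monte Carlo estimate can be checked on a single Bernoulli variable with $p(y{=}1) = \theta$. The exact gradient of the expected cost $\theta J(1) + (1-\theta)J(0)$ with respect to $\theta$ is $J(1) - J(0)$, and the score function is $\partial \log p(y)/\partial\theta = y/\theta - (1-y)/(1-\theta)$:

```python
import numpy as np

# Hypothetical sketch: Monte Carlo score-function (REINFORCE) estimator
# for a single Bernoulli variable with p(y=1) = theta.
rng = np.random.default_rng(1)
theta = 0.3

def J(y):
    return (y - 2.0) ** 2        # arbitrary cost on the discrete sample

# Exact gradient of E[J(y)] = theta*J(1) + (1-theta)*J(0) w.r.t. theta
exact = J(1.0) - J(0.0)          # = 1 - 4 = -3.0

m = 500_000
y = (rng.random(m) < theta).astype(float)
score = y / theta - (1 - y) / (1 - theta)   # d log p(y) / d theta
estimate = np.mean(J(y) * score)
print(estimate)                  # approx -3.0
```

Even though $J$ of a discrete sample gives no usable derivative, the average of $J(y)\,\partial \log p(y)/\partial\theta$ over many samples recovers the gradient of the smooth expected cost.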
The simplest version of REINFORCE can be derived by simply differentiating the expected cost:
$$\mathbb{E}_z[J(y)] = \sum_y J(y)\, p(y), \tag{20.59}$$
$$\frac{\partial \mathbb{E}[J(y)]}{\partial \omega} = \sum_y J(y)\, \frac{\partial p(y)}{\partial \omega} \tag{20.60}$$
$$= \sum_y J(y)\, p(y)\, \frac{\partial \log p(y)}{\partial \omega} \tag{20.61}$$
$$\approx \frac{1}{m} \sum_{i=1}^{m} J(y^{(i)})\, \frac{\partial \log p(y^{(i)})}{\partial \omega}, \qquad y^{(i)} \sim p(y). \tag{20.62}$$
Equation 20.60 relies on the assumption that $J$ does not reference $\omega$ directly. It is trivial to extend the approach to relax this assumption. Equation 20.61 exploits the derivative rule for the logarithm, $\frac{\partial \log p(y)}{\partial \omega} = \frac{1}{p(y)} \frac{\partial p(y)}{\partial \omega}$. Equation 20.62 gives an unbiased Monte Carlo estimator of the gradient.

Anywhere we write $p(y)$ in this section, one could equally write $p(y \mid x)$. This is because $p(y)$ is parametrized by $\omega$, and
$\omega$ contains both $\theta$ and $x$, if $x$ is present.

One issue with the above simple REINFORCE estimator is that it has very high variance, so that many samples of $y$ need to be drawn to obtain a good estimator of the gradient, or equivalently, if only one sample is drawn, SGD will converge very slowly and will require a smaller learning rate. It is possible to considerably reduce the variance of that estimator by using variance reduction methods (Wilson, 1984; L'Ecuyer, 1994). The idea is to modify the estimator so that its expected value remains unchanged but its variance gets reduced. In the context of REINFORCE, the proposed variance reduction methods involve the computation of a baseline that is used to offset $J(y)$. Note that any offset $b(\omega)$ that does not depend on $y$ would not change the expectation of the estimated gradient, because
$$\mathbb{E}_{p(y)}\left[\frac{\partial \log p(y)}{\partial \omega}\right] = \sum_y p(y)\, \frac{\partial \log p(y)}{\partial \omega} \tag{20.63}$$
$$= \sum_y \frac{\partial p(y)}{\partial \omega} \tag{20.64}$$
$$= \frac{\partial}{\partial \omega} \sum_y p(y) = \frac{\partial}{\partial \omega} 1 = 0, \tag{20.65}$$
which means that
$$\mathbb{E}_{p(y)}\left[(J(y) - b(\omega))\, \frac{\partial \log p(y)}{\partial \omega}\right] = \mathbb{E}_{p(y)}\left[J(y)\, \frac{\partial \log p(y)}{\partial \omega}\right] - b(\omega)\, \mathbb{E}_{p(y)}\left[\frac{\partial \log p(y)}{\partial \omega}\right] \tag{20.66}$$
$$= \mathbb{E}_{p(y)}\left[J(y)\, \frac{\partial \log p(y)}{\partial \omega}\right]. \tag{20.67}$$
Furthermore, we can obtain the optimal $b(\omega)$ by computing the variance of $(J(y) - b(\omega))\, \frac{\partial \log p(y)}{\partial \omega}$ under $p(y)$ and minimizing with respect to $b(\omega)$. What we find is that this optimal baseline $b^*(\omega)_i$ is different for each element $\omega_i$ of the vector $\omega$:
$$b^*(\omega)_i = \frac{\mathbb{E}_{p(y)}\left[J(y)\, \left(\frac{\partial \log p(y)}{\partial \omega_i}\right)^2\right]}{\mathbb{E}_{p(y)}\left[\left(\frac{\partial \log p(y)}{\partial \omega_i}\right)^2\right]}. \tag{20.68}$$
The gradient estimator with respect to $\omega_i$ then becomes
$$(J(y) - b(\omega)_i)\, \frac{\partial \log p(y)}{\partial \omega_i}, \tag{20.69}$$
where $b(\omega)_i$ estimates the above $b^*(\omega)_i$. The estimate $b$ is usually obtained by adding extra outputs to the neural network and training the new outputs to estimate $\mathbb{E}_{p(y)}\left[J(y)\left(\frac{\partial \log p(y)}{\partial \omega_i}\right)^2\right]$ and $\mathbb{E}_{p(y)}\left[\left(\frac{\partial \log p(y)}{\partial \omega_i}\right)^2\right]$ for each element of $\omega$. These extra outputs can be trained with the mean squared error objective, using respectively $J(y)\left(\frac{\partial \log p(y)}{\partial \omega_i}\right)^2$ and $\left(\frac{\partial \log p(y)}{\partial \omega_i}\right)^2$ as targets when $y$ is sampled from $p(y)$, for a given $\omega$. The estimate $b$ may then be recovered by substituting these estimates into equation 20.68. Mnih and Gregor (2014) preferred to use a single shared output (across all elements $i$ of $\omega$) trained with the target $J(y)$, using as baseline $b(\omega) \approx \mathbb{E}_{p(y)}[J(y)]$.
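A small numeric sketch (hypothetical, not from the book) makes these properties concrete for a Bernoulli variable with $p(y{=}1) = \theta$: the score function has zero mean (equations 20.63-20.65), so any baseline leaves the estimator unbiased, and the optimal baseline of equation 20.68 minimizes its variance. With a single scalar parameter, it happens to drive the variance all the way to zero:

```python
import numpy as np

# Hypothetical sketch: exact expectations for a Bernoulli p(y=1) = theta.
theta = 0.3
ys = np.array([0.0, 1.0])
p = np.array([1 - theta, theta])
J = (ys - 2.0) ** 2                          # arbitrary cost: J(0)=4, J(1)=1
score = ys / theta - (1 - ys) / (1 - theta)  # d log p(y) / d theta

# Zero-mean score (equations 20.63-20.65): baselines do not bias the gradient.
assert abs(np.sum(p * score)) < 1e-12
grad = np.sum(p * J * score)                 # exact gradient: J(1) - J(0) = -3

def variance(b):
    g = (J - b) * score                      # baselined estimator (equation 20.69)
    return np.sum(p * g**2) - np.sum(p * g) ** 2

# Optimal baseline of equation 20.68 (omega = theta is a scalar here).
b_star = np.sum(p * J * score**2) / np.sum(p * score**2)

print(grad, b_star)
print(variance(0.0) > variance(b_star))      # True: the baseline reduces variance
```

Here $b^* = 1.9$, and with it both possible values of the estimator equal the exact gradient, so the single-sample variance vanishes; in realistic high-dimensional settings the baseline only reduces, not eliminates, the variance.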
Variance reduction methods have been introduced in the reinforcement learning context (Sutton et al., 2000; Weaver and Tao, 2001), generalizing previous work on the case of binary reward by Dayan (1990). See Bengio et al. (2013b), Mnih and Gregor (2014), Ba et al. (2014), Mnih et al. (2014), or Xu et al. (2015) for examples of modern uses of the REINFORCE algorithm with reduced variance in the context of deep learning.

In addition to the use of an input-dependent baseline $b(\omega)$, Mnih and Gregor (2014) found that the scale of $(J(y) - b(\omega))$ could be adjusted during training by dividing it by its standard deviation estimated by a moving average during training, as a kind of adaptive learning rate, to counter the effect of important variations that occur during the course of training in the
magnitude of this quantity. Mnih and Gregor (2014) called this heuristic variance normalization.

REINFORCE-based estimators can be understood as estimating the gradient by correlating choices of $y$ with corresponding values of $J(y)$. If a good value of $y$ is unlikely under the current parametrization, it might take a long time to obtain it by chance, and get the required signal that this configuration should be reinforced.

20.10 Directed Generative Nets

As discussed in chapter 16, directed graphical models make up a prominent class of graphical models. While directed graphical models have been very popular within the greater machine learning community, within the smaller deep learning community they have until roughly 2013 been overshadowed by undirected models such as the RBM.

In this section we review some of the standard directed graphical models that have traditionally been associated with the deep learning community. We have already described deep belief networks, which are a partially directed model. We have also already described sparse coding models, which can be thought of as shallow directed generative models. They are often used as feature learners in the context of deep learning, though they tend to perform poorly at sample generation and density estimation. We now describe a variety of
deep, fully directed models.

20.10.1 Sigmoid Belief Nets

Sigmoid belief networks (Neal, 1990) are a simple form of directed graphical model with a specific kind of conditional probability distribution. In general, we can think of a sigmoid belief network as having a vector of binary states $s$, with each element of the state influenced by its ancestors:
$$p(s_i) = \sigma\left(\sum_{j < i} w_{j,i}\, s_j + b_i\right). \tag{20.70}$$
The most common structure of sigmoid belief network is one that is divided into many layers, with ancestral sampling proceeding through a series of many hidden layers and then ultimately generating the visible layer. This structure is very similar to the deep belief network, except that the units at the beginning of
the sampling process are independent from each other, rather than sampled from a restricted Boltzmann machine. Such a structure is interesting for a variety of reasons. One reason is that the structure is a universal approximator of probability distributions over the visible units, in the sense that it can approximate any probability distribution over binary variables arbitrarily well, given enough depth, even if the width of the individual layers is restricted to the dimensionality of the visible layer (Sutskever and Hinton, 2008).

While generating a sample of the visible units is very efficient in a sigmoid belief network, most other operations are not. Inference over the hidden units given the visible units is intractable. Mean field inference is also intractable because the variational lower bound involves taking expectations of cliques that encompass entire layers. This problem has remained difficult enough to restrict the popularity of directed discrete networks.

One approach for performing inference in a sigmoid belief network is to construct a different lower bound that is specialized for sigmoid belief networks (Saul et al., 1996). This approach has only been applied to very small networks. Another approach is to use learned inference mechanisms as described in section 19.5.
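The efficiency of ancestral sampling follows directly from equation 20.70: each unit is drawn in order, conditioned only on already-sampled ancestors. The code below is a hypothetical minimal illustration (random weights, not a trained model), with the whole state vector treated as one fully ordered chain rather than a layered network:

```python
import numpy as np

# Hypothetical minimal illustration of equation 20.70: ancestral sampling
# of binary states s_1, ..., s_n, each conditioned on its ancestors.
rng = np.random.default_rng(3)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

n = 5
W = rng.normal(scale=0.5, size=(n, n))  # only entries W[j, i] with j < i are used
b = rng.normal(scale=0.1, size=n)

def sample_sbn():
    s = np.zeros(n)
    for i in range(n):
        p_i = sigmoid(W[:i, i] @ s[:i] + b[i])  # p(s_i = 1 | s_1, ..., s_{i-1})
        s[i] = float(rng.random() < p_i)
    return s

samples = np.array([sample_sbn() for _ in range(1000)])
print(samples.shape)  # (1000, 5)
```

One full sample costs a single forward sweep through the units, which is why sampling is cheap even though inference over hidden units given visible units is intractable.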
The Helmholtz machine (Dayan et al., 1995; Dayan and Hinton, 1996) is a sigmoid belief network combined with an inference network that predicts the parameters of the mean field distribution over the hidden units. Modern approaches (Gregor et al., 2014; Mnih and Gregor, 2014) to sigmoid belief networks still use this inference network approach. These techniques remain difficult due to the discrete nature of the latent variables. One cannot simply back-propagate through the output of the inference network, but instead must use the relatively unreliable machinery for back-propagating through discrete sampling processes, described in section 20.9.1. Recent approaches based on importance sampling, reweighted wake-sleep (Bornschein and Bengio, 2015), and bidirectional Helmholtz machines (Bornschein et al., 2015) make it possible to quickly train sigmoid belief networks and reach state-of-the-art performance on benchmark tasks.
A special case of sigmoid belief networks is the case where there are no latent variables. Learning in this case is efficient, because there is no need to marginalize latent variables out of the likelihood. A family of models called auto-regressive networks generalizes this fully visible belief network to other kinds of variables besides binary variables, and to other structures of conditional distributions besides log-linear relationships. Auto-regressive networks are described later, in section 20.10.7.
20.10.2 Differentiable Generator Nets

Many generative models are based on the idea of using a differentiable generator network. The model transforms samples of latent variables $z$ to samples $x$, or to distributions over samples $x$, using a differentiable function $g(z; \theta^{(g)})$, which is typically represented by a neural network. This model class includes variational autoencoders, which pair the generator net with an inference net; generative adversarial networks, which pair the generator network with a discriminator network; and techniques that train generator networks in isolation.

Generator networks are essentially just parametrized computational procedures for generating samples, where the architecture provides the family of possible distributions to sample from and the parameters select a distribution from within that family. As an example, the standard procedure for drawing samples from a normal distribution with mean $\mu$ and covariance $\Sigma$ is to feed samples $z$ from a normal distribution with zero mean and identity covariance into a very simple generator network. This generator network contains just one affine layer:
$$x = g(z) = \mu + Lz, \tag{20.71}$$
where $L$ is given by the Cholesky decomposition of $\Sigma$.

Pseudorandom number generators can also use nonlinear transformations of simple distributions.
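A minimal sketch of equation 20.71 (hypothetical example values, not from the book): the one-layer affine "generator" maps $\mathcal{N}(0, I)$ noise to samples whose empirical mean and covariance match the target $\mu$ and $\Sigma$:

```python
import numpy as np

# Hypothetical sketch of equation 20.71: a one-affine-layer generator
# x = g(z) = mu + L z mapping N(0, I) noise to N(mu, Sigma) samples,
# with L the Cholesky factor of Sigma.
rng = np.random.default_rng(4)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
L = np.linalg.cholesky(Sigma)

m = 200_000
z = rng.standard_normal((m, 2))   # unit Gaussian latent samples
x = mu + z @ L.T                  # the affine generator, applied row-wise

print(x.mean(axis=0))             # approx mu
print(np.cov(x.T))                # approx Sigma, since Cov(Lz) = L L^T = Sigma
```

The same picture generalizes: replace the single affine layer with a deep network and the family of reachable output distributions becomes far richer than the Gaussian family.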
For example, inverse transform sampling (Devroye, 2013) draws a scalar $z$ from $U(0, 1)$ and applies a nonlinear transformation to obtain a scalar $x$. In this case $g(z)$ is given by the inverse of the cumulative distribution function $F(x) = \int_{-\infty}^{x} p(v)\, dv$. If we are able to specify $p(x)$, integrate over $x$, and invert the resulting function, we can sample from $p(x)$ without using machine learning.

To generate samples from more complicated distributions that are difficult to specify directly, difficult to integrate over, or whose resulting integrals are difficult to invert, we use a feedforward network to represent a parametric family of nonlinear functions $g$, and use training data to infer the parameters selecting the desired function. We can think of $g$ as providing a nonlinear change of variables that transforms the distribution over $z$ into the desired distribution over $x$. Recall from equation 3.47 that, for invertible, differentiable, continuous
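A hypothetical sketch of inverse transform sampling (not code from the book): for an exponential distribution with rate $\lambda$, $F(x) = 1 - e^{-\lambda x}$, so $g(z) = F^{-1}(z) = -\log(1 - z)/\lambda$ maps $z \sim U(0, 1)$ to exponential samples:

```python
import numpy as np

# Hypothetical sketch: inverse transform sampling for Exp(lam).
# F(x) = 1 - exp(-lam * x)  =>  g(z) = F^{-1}(z) = -log(1 - z) / lam
rng = np.random.default_rng(5)
lam = 2.0

z = rng.random(500_000)        # z ~ U(0, 1)
x = -np.log1p(-z) / lam        # nonlinear transformation g(z)

print(x.mean())                # approx 1/lam = 0.5 (mean of Exp(lam))
print(x.var())                 # approx 1/lam**2 = 0.25
```

Here the density, the integral, and the inverse are all available in closed form; when any of these steps is intractable, the differentiable generator network takes the place of the hand-derived $g$.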