… infer the parameters selecting the desired function. We can think of g as providing a nonlinear change of variables that transforms the distribution over z into the desired distribution over x.

Recall from equation 3.47 that, for invertible, differentiable, continuous g,

$$ p_z(z) = p_x(g(z)) \left| \det\left( \frac{\partial g}{\partial z} \right) \right|. \tag{20.72} $$
Source: Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
Chapter 20. Deep Generative Models

This implicitly imposes a probability distribution over x:

$$ p_x(x) = p_z(g^{-1}(x)) \left| \det\left( \frac{\partial g}{\partial z} \right) \right|^{-1}. \tag{20.73} $$

Of course, this formula may be difficult to evaluate, depending on the choice of g, so we often use indirect means of learning g, rather than trying to maximize log p(x) directly.

In some cases, rather than using g to provide a sample of x directly, we use g to define a conditional distribution over x. For example, we could use a generator net whose final layer consists of sigmoid outputs to provide the mean parameters of Bernoulli distributions:

$$ p(x_i = 1 \mid z) = g(z)_i. \tag{20.74} $$

In this case, when we use g to define p(x | z), we impose a distribution over x by marginalizing z:

$$ p(x) = \mathbb{E}_z \, p(x \mid z). \tag{20.75} $$

Both approaches define a distribution p_g(x) and allow us to train various criteria of p_g using the reparametrization trick of section 20.9.
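As a quick numerical check of equations 20.72 and 20.73, the sketch below (an illustration, not code from the book) uses an affine map g(z) = a z + b applied to a standard normal z, for which the transformed density is also known in closed form:

```python
import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Density of N(mu, sigma^2) evaluated at x.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Invertible, differentiable change of variables g(z) = a*z + b, z ~ N(0, 1).
a, b = 2.0, 1.0
g_inv = lambda x: (x - b) / a

x = 3.0
# Eq. 20.73: p_x(x) = p_z(g^{-1}(x)) |det(dg/dz)|^{-1}, and here dg/dz = a.
p_x_change_of_vars = normal_pdf(g_inv(x)) / abs(a)
# Closed form for comparison: a*z + b with z ~ N(0, 1) is N(b, a^2).
p_x_closed_form = normal_pdf(x, mu=b, sigma=abs(a))

assert np.isclose(p_x_change_of_vars, p_x_closed_form)
```

For a nonlinear g the same formula holds, with the determinant of the Jacobian evaluated at g^{-1}(x).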
The two different approaches to formulating generator nets (emitting the parameters of a conditional distribution versus directly emitting samples) have complementary strengths and weaknesses. When the generator net defines a conditional distribution over x, it is capable of generating discrete data as well as continuous data. When the generator net provides samples directly, it is capable of generating only continuous data (we could introduce discretization in the forward propagation, but doing so would mean the model could no longer be trained using back-propagation). The advantage of direct sampling is that we are no longer forced to use conditional distributions whose form can be easily written down and algebraically manipulated by a human designer.

Approaches based on differentiable generator networks are motivated by the success of gradient descent applied to differentiable feedforward networks for classification. In the context of supervised learning, deep feedforward networks trained with gradient-based learning seem practically guaranteed to succeed given enough hidden units and enough training data. Can this same recipe for success transfer to generative modeling?
Generative modeling seems to be more difficult than classification or regression because the learning process requires optimizing intractable criteria.
In the context of differentiable generator nets, the criteria are intractable because the data does not specify both the inputs z and the outputs x of the generator net. In the case of supervised learning, both the inputs x and the outputs y were given, and the optimization procedure needs only to learn how to produce the specified mapping. In the case of generative modeling, the learning procedure needs to determine how to arrange z space in a useful way and additionally how to map from z to x.

Dosovitskiy et al. (2015) studied a simplified problem, where the correspondence between z and x is given. Specifically, the training data is computer-rendered imagery of chairs. The latent variables z are parameters given to the rendering engine describing the choice of which chair model to use, the position of the chair, and other configuration details that affect the rendering of the image.
Using this synthetically generated data, a convolutional network is able to learn to map z descriptions of the content of an image to x approximations of rendered images. This suggests that contemporary differentiable generator networks have sufficient model capacity to be good generative models, and that contemporary optimization algorithms have the ability to fit them. The difficulty lies in determining how to train generator networks when the value of z for each x is not fixed and known ahead of time.

The following sections describe several approaches to training differentiable generator nets given only training samples of x.

20.10.3 Variational Autoencoders

The variational autoencoder or VAE (Kingma, 2013; Rezende et al., 2014) is a directed model that uses learned approximate inference and can be trained purely with gradient-based methods.

To generate a sample from the model, the VAE first draws a sample z from the code distribution p_model(z). The sample is then run through a differentiable generator network g(z). Finally, x is sampled from a distribution p_model(x; g(z)) = p_model(x | z).
However, during training, the approximate inference network (or encoder) q(z | x) is used to obtain z, and p_model(x | z) is then viewed as a decoder network.

The key insight behind variational autoencoders is that they may be trained by maximizing the variational lower bound L(q) associated with data point x:

$$ L(q) = \mathbb{E}_{z \sim q(z \mid x)} \log p_{\text{model}}(z, x) + \mathcal{H}(q(z \mid x)) \tag{20.76} $$
$$ = \mathbb{E}_{z \sim q(z \mid x)} \log p_{\text{model}}(x \mid z) - D_{\text{KL}}(q(z \mid x) \,\|\, p_{\text{model}}(z)) \tag{20.77} $$
$$ \leq \log p_{\text{model}}(x). \tag{20.78} $$
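For a toy model in which everything is Gaussian, all terms in equations 20.76 to 20.78 are available in closed form. The sketch below (illustrative; the model and variable names are assumptions, not from the book) checks that the two forms of L agree and that the bound holds, with equality when q is the exact posterior:

```python
import numpy as np

# Toy model: p(z) = N(0, 1), p(x|z) = N(z, 1), so p(x) = N(0, 2)
# and the exact posterior is p(z|x) = N(x/2, 1/2).
LOG2PI = np.log(2.0 * np.pi)

def elbo_joint_form(x, mu, s2):
    """Eq. 20.76: E_q[log p(z, x)] + H(q), for q(z|x) = N(mu, s2)."""
    e_log_pz = -0.5 * LOG2PI - 0.5 * (mu ** 2 + s2)
    e_log_px_given_z = -0.5 * LOG2PI - 0.5 * ((x - mu) ** 2 + s2)
    entropy = 0.5 * np.log(2.0 * np.pi * np.e * s2)
    return e_log_pz + e_log_px_given_z + entropy

def elbo_kl_form(x, mu, s2):
    """Eq. 20.77: E_q[log p(x|z)] - D_KL(q(z|x) || p(z))."""
    e_log_px_given_z = -0.5 * LOG2PI - 0.5 * ((x - mu) ** 2 + s2)
    kl = 0.5 * (mu ** 2 + s2 - 1.0 - np.log(s2))
    return e_log_px_given_z - kl

x = 1.3
log_px = -0.5 * np.log(2.0 * np.pi * 2.0) - x ** 2 / 4.0  # log N(x; 0, 2)

# The two forms agree for any q ...
assert np.isclose(elbo_joint_form(x, 0.9, 0.3), elbo_kl_form(x, 0.9, 0.3))
# ... the bound of eq. 20.78 holds ...
assert elbo_kl_form(x, 0.9, 0.3) <= log_px
# ... and it is tight when q is the exact posterior N(x/2, 1/2).
assert np.isclose(elbo_kl_form(x, x / 2.0, 0.5), log_px)
```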
In equation 20.76, we recognize the first term as the joint log-likelihood of the visible and hidden variables under the approximate posterior over the latent variables (just like with EM, except that we use an approximate rather than the exact posterior). We recognize also a second term, the entropy of the approximate posterior. When q is chosen to be a Gaussian distribution, with noise added to a predicted mean value, maximizing this entropy term encourages increasing the standard deviation of this noise. More generally, this entropy term encourages the variational posterior to place high probability mass on many z values that could have generated x, rather than collapsing to a single point estimate of the most likely value. In equation 20.77, we recognize the first term as the reconstruction log-likelihood found in other autoencoders. The second term tries to make the approximate posterior distribution q(z | x) and the model prior p_model(z) approach each other.

Traditional approaches to variational inference and learning infer q via an optimization algorithm, typically iterated fixed point equations (section 19.4).
These approaches are slow and often require the ability to compute $\mathbb{E}_{z \sim q} \log p_{\text{model}}(z, x)$ in closed form. The main idea behind the variational autoencoder is to train a parametric encoder (also sometimes called an inference network or recognition model) that produces the parameters of q. So long as z is a continuous variable, we can then back-propagate through samples of z drawn from q(z | x) = q(z; f(x; θ)) in order to obtain a gradient with respect to θ. Learning then consists solely of maximizing L with respect to the parameters of the encoder and decoder. All of the expectations in L may be approximated by Monte Carlo sampling.

The variational autoencoder approach is elegant, theoretically pleasing, and simple to implement. It also obtains excellent results and is among the state of the art approaches to generative modeling. Its main drawback is that samples from variational autoencoders trained on images tend to be somewhat blurry.
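The back-propagation through samples described above is the reparametrization trick: writing z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, 1) makes the sample a deterministic, differentiable function of the distribution's parameters. A minimal sketch (an illustration with an arbitrary objective, not code from the book) checks the resulting pathwise gradient estimate against the analytic answer:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.8
eps = rng.standard_normal(200_000)

# Reparametrize: z = mu + sigma * eps is differentiable in (mu, sigma).
z = mu + sigma * eps

# Objective f(z) = z^2, so E[f] = mu^2 + sigma^2 and dE[f]/dmu = 2*mu.
# Pathwise gradient estimate: E[df/dz * dz/dmu] = E[2*z * 1].
grad_mu_estimate = np.mean(2.0 * z)

assert abs(grad_mu_estimate - 2.0 * mu) < 0.02
```

Without the reparametrization, the sampling step would block the gradient and a higher-variance estimator (such as REINFORCE) would be needed.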
The causes of this phenomenon are not yet known. One possibility is that the blurriness is an intrinsic effect of maximum likelihood, which minimizes D_KL(p_data ‖ p_model). As illustrated in figure 3.6, this means that the model will assign high probability to points that occur in the training set, but may also assign high probability to other points. These other points may include blurry images. Part of the reason that the model would choose to put probability mass on blurry images rather than some other part of the space is that the variational autoencoders used in practice usually have a Gaussian distribution for p_model(x; g(z)). Maximizing a lower bound on the likelihood of such a distribution is similar to training a traditional autoencoder with mean squared error, in the sense that it has a tendency to ignore features of the input that occupy few pixels or that cause only a small change in the brightness of the pixels that they occupy.
This issue is not specific to VAEs, and is shared with generative models that optimize a log-likelihood, or equivalently, D_KL(p_data ‖ p_model), as argued by Theis et al. (2015) and by Huszár (2015). Another troubling issue with contemporary VAE models is that they tend to use only a small subset of the dimensions of z, as if the encoder were not able to transform enough of the local directions in input space to a space where the marginal distribution matches the factorized prior.

The VAE framework is very straightforward to extend to a wide range of model architectures. This is a key advantage over Boltzmann machines, which require extremely careful model design to maintain tractability. VAEs work very well with a diverse family of differentiable operators. One particularly sophisticated VAE is the deep recurrent attention writer or DRAW model (Gregor et al., 2015). DRAW uses a recurrent encoder and recurrent decoder combined with an attention mechanism.
The generation process for the DRAW model consists of sequentially visiting different small image patches and drawing the values of the pixels at those points.

VAEs can also be extended to generate sequences by defining variational RNNs (Chung et al., 2015b), which use a recurrent encoder and decoder within the VAE framework. Generating a sample from a traditional RNN involves only non-deterministic operations at the output space. Variational RNNs also have random variability at the potentially more abstract level captured by the VAE latent variables.

The VAE framework has been extended to maximize not just the traditional variational lower bound, but instead the importance weighted autoencoder (Burda et al., 2015) objective:

$$ L_k(x, q) = \mathbb{E}_{z^{(1)}, \dots, z^{(k)} \sim q(z \mid x)} \left[ \log \frac{1}{k} \sum_{i=1}^{k} \frac{p_{\text{model}}(x, z^{(i)})}{q(z^{(i)} \mid x)} \right]. \tag{20.79} $$
This new objective is equivalent to the traditional lower bound L when k = 1. However, it may also be interpreted as forming an estimate of the true log p_model(x) using importance sampling of z from proposal distribution q(z | x). The importance weighted autoencoder objective is also a lower bound on log p_model(x) and becomes tighter as k increases.

Variational autoencoders have some interesting connections to the MP-DBM and other approaches that involve back-propagation through the approximate inference graph (Goodfellow et al., 2013b; Stoyanov et al., 2011; Brakel et al., 2013). These previous approaches required an inference procedure such as mean field fixed point equations to provide the computational graph. The variational autoencoder is defined for arbitrary computational graphs, which makes it applicable to a wider range of probabilistic model families because there is no need to restrict the choice of models to those with tractable mean field fixed point equations.
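The tightening of the importance weighted bound of equation 20.79 as k grows can be checked by Monte Carlo on the same kind of Gaussian toy model. This sketch (illustrative, not from the book) uses a deliberately mismatched proposal q so that the k = 1 bound is visibly loose:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: p(z) = N(0, 1), p(x|z) = N(z, 1), so log p(x) = log N(x; 0, 2).
def log_normal(v, mu, var):
    return -0.5 * np.log(2.0 * np.pi * var) - (v - mu) ** 2 / (2.0 * var)

x = 1.0
log_px = log_normal(x, 0.0, 2.0)

def iwae_bound(k, n_outer=20_000):
    # Mismatched proposal q(z|x) = N(0, 1); the exact posterior is N(0.5, 0.5).
    z = rng.standard_normal((n_outer, k))
    # log [p(x, z^(i)) / q(z^(i)|x)] for each of the k samples.
    log_w = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0) - log_normal(z, 0.0, 1.0)
    # Eq. 20.79: expectation of the log of the mean of k importance weights.
    return np.mean(np.log(np.mean(np.exp(log_w), axis=1)))

l1, l50 = iwae_bound(1), iwae_bound(50)
assert l1 < l50 <= log_px + 0.01  # the bound tightens as k grows
```

(Here the prior and proposal terms cancel because q was chosen equal to the prior; they are kept to show the general form of the weights.)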
The variational autoencoder also has the advantage that it increases a bound on the log-likelihood of the model, while the criteria for the MP-DBM and related models are more heuristic and have little probabilistic interpretation beyond making the results of approximate inference accurate. One disadvantage of the variational autoencoder is that it learns an inference network for only one problem, inferring z given x. The older methods are able to perform approximate inference over any subset of variables given any other subset of variables, because the mean field fixed point equations specify how to share parameters between the computational graphs for all of these different problems.

One very nice property of the variational autoencoder is that simultaneously training a parametric encoder in combination with the generator network forces the model to learn a predictable coordinate system that the encoder can capture. This makes it an excellent manifold learning algorithm. See figure 20.6 for examples of low-dimensional manifolds learned by the variational autoencoder.
In one of the cases demonstrated in the figure, the algorithm discovered two independent factors of variation present in images of faces: angle of rotation and emotional expression.

20.10.4 Generative Adversarial Networks

Generative adversarial networks or GANs (Goodfellow et al., 2014c) are another generative modeling approach based on differentiable generator networks.

Generative adversarial networks are based on a game theoretic scenario in which the generator network must compete against an adversary. The generator network directly produces samples x = g(z; θ^(g)). Its adversary, the discriminator network, attempts to distinguish between samples drawn from the training data and samples drawn from the generator. The discriminator emits a probability value given by d(x; θ^(d)), indicating the probability that x is a real training example rather than a fake sample drawn from the model.
The simplest way to formulate learning in generative adversarial networks is as a zero-sum game, in which a function v(θ^(g), θ^(d)) determines the payoff of the discriminator. The generator receives −v(θ^(g), θ^(d)) as its own payoff. During learning, each player attempts to maximize its own payoff, so that at convergence

$$ g^* = \arg\min_g \max_d v(g, d). \tag{20.80} $$

The default choice for v is

$$ v(\theta^{(g)}, \theta^{(d)}) = \mathbb{E}_{x \sim p_{\text{data}}} \log d(x) + \mathbb{E}_{x \sim p_{\text{model}}} \log\big(1 - d(x)\big). \tag{20.81} $$
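A standard result from the original GAN analysis (stated here as background; it is not derived in this excerpt) is that for a fixed generator, the discriminator maximizing v in equation 20.81 is d*(x) = p_data(x) / (p_data(x) + p_model(x)). The sketch below checks this numerically on a pair of fixed 1-D densities:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Fixed 1-D densities standing in for p_data and p_model.
xs = np.linspace(-6.0, 8.0, 2001)
p_data = normal_pdf(xs, 0.0, 1.0)
p_model = normal_pdf(xs, 2.0, 1.5)

def value(d):
    """Eq. 20.81 as an integral over x: E_data[log d] + E_model[log(1 - d)]."""
    dx = xs[1] - xs[0]
    return np.sum(p_data * np.log(d) + p_model * np.log(1.0 - d)) * dx

d_star = p_data / (p_data + p_model)

# Any perturbation of d* (kept inside (0, 1)) lowers the discriminator payoff.
for shift in (-0.1, 0.05, 0.2):
    d_alt = np.clip(d_star + shift, 1e-6, 1.0 - 1e-6)
    assert value(d_star) >= value(d_alt)
```

The check works because the integrand a log d + b log(1 − d) is maximized pointwise at d = a / (a + b).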
Figure 20.6: Examples of two-dimensional coordinate systems for high-dimensional manifolds, learned by a variational autoencoder (Kingma and Welling, 2014a). Two dimensions may be plotted directly on the page for visualization, so we can gain an understanding of how the model works by training a model with a 2-D latent code, even if we believe the intrinsic dimensionality of the data manifold is much higher. The images shown are not examples from the training set but images x actually generated by the model p(x | z), simply by changing the 2-D "code" z (each image corresponds to a different choice of "code" z on a 2-D uniform grid). (Left) The two-dimensional map of the Frey faces manifold. One dimension that has been discovered (horizontal) mostly corresponds to a rotation of the face, while the other (vertical) corresponds to the emotional expression. (Right) The two-dimensional map of the MNIST manifold.

This drives the discriminator to attempt to learn to correctly classify samples as real or fake. Simultaneously, the generator attempts to fool the classifier into believing its samples are real.
At convergence, the generator's samples are indistinguishable from real data, and the discriminator outputs 1/2 everywhere. The discriminator may then be discarded.

The main motivation for the design of GANs is that the learning process requires neither approximate inference nor approximation of a partition function gradient. In the case where max_d v(g, d) is convex in θ^(g) (such as the case where optimization is performed directly in the space of probability density functions), the procedure is guaranteed to converge and is asymptotically consistent.

Unfortunately, learning in GANs can be difficult in practice when g and d are represented by neural networks and max_d v(g, d) is not convex.
Goodfellow (2014) identified non-convergence as an issue that may cause GANs to underfit. In general, simultaneous gradient descent on two players' costs is not guaranteed to reach an equilibrium. Consider for example the value function v(a, b) = ab, where one player controls a and incurs cost ab, while the other player controls b and receives a cost −ab. If we model each player as making infinitesimally small gradient steps, each player reducing their own cost at the expense of the other player, then a and b go into a stable, circular orbit, rather than arriving at the equilibrium point at the origin. Note that the equilibria for a minimax game are not local minima of v. Instead, they are points that are simultaneously minima for both players' costs. This means that they are saddle points of v that are local minima with respect to the first player's parameters and local maxima with respect to the second player's parameters. It is possible for the two players to take turns increasing then decreasing v forever, rather than landing exactly on the saddle point where neither player is capable of reducing its cost.
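The circular orbit described above is easy to reproduce. Simulating simultaneous small gradient steps on v(a, b) = ab (a sketch with an arbitrary step size, not code from the book) shows the pair circling the origin rather than converging to it; with discrete steps the radius in fact grows slightly each iteration:

```python
import numpy as np

eta = 0.01          # gradient step size
a, b = 1.0, 0.0     # start away from the equilibrium at the origin
start_radius = np.hypot(a, b)

for _ in range(5000):
    # Player 1 descends its cost ab:   a <- a - eta * d(ab)/da = a - eta * b
    # Player 2 descends its cost -ab:  b <- b - eta * d(-ab)/db = b + eta * a
    a, b = a - eta * b, b + eta * a

# Each discrete step multiplies the squared radius a^2 + b^2 by (1 + eta^2),
# so the pair orbits the origin and never reaches the equilibrium.
assert np.hypot(a, b) >= start_radius
```

In the continuous-time limit the radius is exactly conserved, which is the stable circular orbit described in the text.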
It is not known to what extent this non-convergence problem affects GANs.

Goodfellow (2014) identified an alternative formulation of the payoffs, in which the game is no longer zero-sum, that has the same expected gradient as maximum likelihood learning whenever the discriminator is optimal. Because maximum likelihood training converges, this reformulation of the GAN game should also converge, given enough samples. Unfortunately, this alternative formulation does not seem to improve convergence in practice, possibly due to suboptimality of the discriminator, or possibly due to high variance around the expected gradient.

In realistic experiments, the best-performing formulation of the GAN game is a different formulation that is neither zero-sum nor equivalent to maximum likelihood, introduced by Goodfellow et al. (2014c) with a heuristic motivation.
In this best-performing formulation, the generator aims to increase the log probability that the discriminator makes a mistake, rather than aiming to decrease the log probability that the discriminator makes the correct prediction. This reformulation is motivated solely by the observation that it causes the derivative of the generator's cost function with respect to the discriminator's logits to remain large even in the situation where the discriminator confidently rejects all generator samples.

Stabilization of GAN learning remains an open problem. Fortunately, GAN learning performs well when the model architecture and hyperparameters are carefully selected. Radford et al. (2015) crafted a deep convolutional GAN (DCGAN) that performs very well for image synthesis tasks, and showed that its latent representation space captures important factors of variation, as shown in figure 15.9. See figure 20.7 for examples of images generated by a DCGAN generator.

The GAN learning problem can also be simplified by breaking the generation process into many levels of detail.
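The derivative claim behind the heuristic formulation (that the generator cost −log d(g(z)) keeps a large gradient where the zero-sum cost log(1 − d(g(z))) saturates) can be checked directly. With d = sigmoid(a) for discriminator logit a, the derivative of log(1 − d) with respect to a is −d, while the derivative of −log d is −(1 − d):

```python
import numpy as np

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Discriminator logit for a generator sample it confidently rejects.
logit = -10.0
d = sigmoid(logit)            # d(g(z)) is close to 0

# d/da [log(1 - d)] = -d          (saturating, zero-sum formulation)
grad_saturating = -d
# d/da [-log d]    = -(1 - d)     (heuristic, non-saturating formulation)
grad_heuristic = -(1.0 - d)

# The zero-sum gradient vanishes; the heuristic gradient stays near -1.
assert abs(grad_saturating) < 1e-4
assert abs(grad_heuristic) > 0.99
```

Both costs push d(g(z)) toward 1, but only the heuristic one provides a strong learning signal early in training, when the discriminator easily rejects generator samples.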
Figure 20.7: Images generated by GANs trained on the LSUN dataset. (Left) Images of bedrooms generated by a DCGAN model, reproduced with permission from Radford et al. (2015). (Right) Images of churches generated by a LAPGAN model, reproduced with permission from Denton et al. (2015).

It is possible to train conditional GANs (Mirza and Osindero, 2014) that learn to sample from a distribution p(x | y) rather than simply sampling from a marginal distribution p(x). Denton et al. (2015) showed that a series of conditional GANs can be trained to first generate a very low-resolution version of an image, then incrementally add details to the image. This technique is called the LAPGAN model, due to the use of a Laplacian pyramid to generate the images containing varying levels of detail. LAPGAN generators are able to fool not only discriminator networks but also human observers, with experimental subjects identifying up to 40% of the outputs of the network as being real data. See figure 20.7 for examples of images generated by a LAPGAN generator.
One unusual capability of the GAN training procedure is that it can fit probability distributions that assign zero probability to the training points. Rather than maximizing the log probability of specific points, the generator net learns to trace out a manifold whose points resemble training points in some way. Somewhat paradoxically, this means that the model may assign a log-likelihood of negative infinity to the test set, while still representing a manifold that a human observer judges to capture the essence of the generation task. This is not clearly an advantage or a disadvantage, and one may also guarantee that the generator network assigns non-zero probability to all points simply by making the last layer of the generator network add Gaussian noise to all of the generated values. Generator networks that add Gaussian noise in this manner sample from the same distribution that one obtains by using the generator network to parametrize the mean of a conditional Gaussian distribution.
Dropout seems to be important in the discriminator network. In particular, units should be stochastically dropped while computing the gradient for the generator network to follow. Following the gradient of the deterministic version of the discriminator with its weights divided by two does not seem to be as effective. Likewise, never using dropout seems to yield poor results.

While the GAN framework is designed for differentiable generator networks, similar principles can be used to train other kinds of models. For example, self-supervised boosting can be used to train an RBM generator to fool a logistic regression discriminator (Welling et al., 2002).

20.10.5 Generative Moment Matching Networks

Generative moment matching networks (Li et al., 2015; Dziugaite et al., 2015) are another form of generative model based on differentiable generator networks. Unlike VAEs and GANs, they do not need to pair the generator network with any other network: neither an inference network as used with VAEs nor a discriminator network as used with GANs. These networks are trained with a technique called moment matching.
The basic idea behind moment matching is to train the generator in such a way that many of the statistics of samples generated by the model are as similar as possible to the statistics of the examples in the training set. In this context, a moment is an expectation of different powers of a random variable. For example, the first moment is the mean, the second moment is the mean of the squared values, and so on. In multiple dimensions, each element of the random vector may be raised to different powers, so that a moment may be any quantity of the form

$$ \mathbb{E}_x \prod_i x_i^{n_i}, \tag{20.82} $$

where n = [n_1, n_2, \dots, n_d] is a vector of non-negative integers.

Upon first examination, this approach seems to be computationally infeasible. For example, if we want to match all the moments of the form x_i x_j, then we need to minimize the difference between a number of values that is quadratic in the dimension of x.
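Moments of the form in equation 20.82 are straightforward to estimate from samples. The sketch below (illustrative; the exponent vectors are arbitrary choices) estimates a few moments of an isotropic Gaussian, for which the true values are known: E[x_1] = 0, E[x_1 x_2] = 0, and E[x_1^2 x_2^2] = 1 for independent standard normals.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal((200_000, 3))  # draws of a 3-D random vector

def empirical_moment(x, n):
    """Estimate E[prod_i x_i^{n_i}] (eq. 20.82) for exponent vector n."""
    return np.mean(np.prod(x ** np.asarray(n), axis=1))

m_mean = empirical_moment(samples, [1, 0, 0])   # first moment, true value 0
m_x1x2 = empirical_moment(samples, [1, 1, 0])   # mixed second moment, true 0
m_sq = empirical_moment(samples, [2, 2, 0])     # mixed fourth moment, true 1

assert abs(m_mean) < 0.01 and abs(m_x1x2) < 0.01
assert abs(m_sq - 1.0) < 0.05
```

The combinatorial growth is visible even here: in d dimensions there are already on the order of d^2 second moments of the form x_i x_j.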
and second moments would only be sufficient to fit a multivariate Gaussian distribution, which captures only linear relationships between values. Our ambitions for neural networks are to capture complex nonlinear relationships, which would require far more moments. GANs avoid this problem of exhaustively enumerating all moments by using a
dynamically updated discriminator that automatically focuses its attention on whichever statistic the generator network is matching the least effectively. Instead, generative moment matching networks can be trained by minimizing a cost function called maximum mean discrepancy (Schölkopf and Smola, 2002; Gretton et al., 2012), or MMD. This cost function measures the error in the first moments in an infinite-dimensional space, using an implicit mapping to feature space defined by a kernel function in order to make computations on infinite-dimensional vectors tractable. The MMD cost is zero if and only if the two distributions being compared are equal.

Visually, the samples from generative moment matching networks are somewhat disappointing. Fortunately, they can be improved by combining the generator network with an autoencoder. First, an autoencoder is trained to reconstruct the training set. Next, the encoder of the autoencoder is used to transform the entire training set into code space. The generator network is then trained to generate code samples, which may be mapped to visually pleasing samples via the decoder.

Unlike GANs, the cost function is defined only with respect to a batch of examples from both the training set and the generator network. It is not possible to make a training update as a function of only one training example or only one sample from the generator network. This is because the moments must be computed as an empirical average across many samples. When the batch size is too small, MMD can underestimate the true amount of variation in the distributions being sampled. No finite batch size is sufficiently large to eliminate this problem entirely, but larger batches reduce the amount of underestimation. When the batch size is too large, the training procedure becomes infeasibly slow, because many examples must be processed in order to compute a single small gradient step. As with GANs, it is possible to train a generator net using MMD even if that generator net assigns zero probability to the training points.
20.10.6 Convolutional Generative Networks

When generating images, it is often useful to use a generator network that includes a convolutional structure (see for example Goodfellow et al. (2014c) or Dosovitskiy et al. (2015)). To do so, we use the "transpose" of the convolution operator, described in section 9.5. This approach often yields more realistic images and does so using fewer parameters than using fully connected layers without parameter sharing.

Convolutional networks for recognition tasks have information flow from the image to some summarization layer at the top of the network, often a class label.
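As a concrete illustration (not from the book), the "transpose" of a convolution mentioned above is simply the adjoint of the linear map that the convolution defines. A minimal 1-D NumPy sketch, with hypothetical helper names, that checks the defining adjoint property ⟨conv(x), y⟩ = ⟨x, conv_transpose(y)⟩:

```python
import numpy as np

def conv1d_valid(x, k):
    """'Valid' 1-D correlation: output length is len(x) - len(k) + 1."""
    n, m = len(x), len(k)
    return np.array([np.dot(x[i:i + m], k) for i in range(n - m + 1)])

def conv1d_transpose(y, k):
    """Transpose (adjoint) of conv1d_valid: maps a length-(n - m + 1)
    signal back up to length n by scatter-adding scaled copies of the
    kernel. This is the upsampling direction a generator net needs."""
    m = len(k)
    x = np.zeros(len(y) + m - 1)
    for i, v in enumerate(y):
        x[i:i + m] += v * k
    return x
```

Viewing the valid convolution as a matrix C, `conv1d_transpose` computes C^T y, which is why its output is spatially larger than its input: the opposite of the shrinking that a recognition network performs.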
As this image flows upward through the network, information is discarded as the representation of the image becomes more invariant to nuisance transformations. In a generator network, the opposite is true: rich details must be added as the representation of the image to be generated propagates through the network, culminating in the final representation of the image, which is of course the image itself, in all its detailed glory, with object positions and poses and textures and lighting. The primary mechanism for discarding information in a convolutional recognition network is the pooling layer. The generator network instead needs to add information. We cannot put the inverse of a pooling layer into the generator network, because most pooling functions are not invertible. A simpler operation is to merely increase the spatial size of the representation. An approach that seems to perform acceptably is to use the "un-pooling" introduced by Dosovitskiy et al. (2015). This layer corresponds to the inverse of the max-pooling operation under certain simplifying conditions. First, the stride of the max-pooling operation is constrained to be equal to the width of the pooling region. Second, the maximum input within each
pooling region is assumed to be the input in the upper-left corner. Finally, all non-maximal inputs within each pooling region are assumed to be zero. These are very strong and unrealistic assumptions, but they do allow the max-pooling operator to be inverted. The inverse un-pooling operation allocates a tensor of zeros, then copies each value from spatial coordinate i of the input to spatial coordinate i × k of the output. The integer value k defines the size of the pooling region. Even though the assumptions motivating the definition of the un-pooling operator are unrealistic, the subsequent layers are able to learn to compensate for its unusual output, so the samples generated by the model as a whole are visually pleasing.

20.10.7 Auto-Regressive Networks

Auto-regressive networks are directed probabilistic models with no latent random variables. The conditional probability distributions in these models are represented by neural networks (
sometimes extremely simple neural networks, such as logistic regression). The graph structure of these models is the complete graph. They decompose a joint probability over the observed variables using the chain rule of probability, to obtain a product of conditionals of the form p(x_d | x_{d-1}, ..., x_1). Such models have been called fully-visible Bayes networks (FVBNs) and used successfully in many forms, first with logistic regression for each conditional distribution (Frey, 1998) and then with neural networks with hidden units (Bengio and Bengio, 2000b; Larochelle and Murray, 2011). In some forms of auto-regressive networks, such as NADE (Larochelle and Murray, 2011), described
in section 20.10.10 below, we can introduce a form of parameter sharing that brings both a statistical advantage (fewer unique parameters) and a computational advantage (less computation). This is one more instance of the recurring deep learning motif of reuse of features.

Figure 20.8: A fully visible belief network predicts the i-th variable from the i−1 previous ones. (Top) The directed graphical model for an FVBN over variables x_1, x_2, x_3, x_4. (Bottom) The corresponding computational graph, in the case of the logistic FVBN, where each prediction p(x_i | x_{i-1}, ..., x_1) is made by a linear predictor.

20.10.8 Linear Auto-Regressive Networks

The simplest form of auto-regressive network has no hidden units and no sharing of parameters or features. Each p(x_i | x_{i-1}, ..., x_1) is parametrized as a linear model (linear regression for real-valued data, logistic regression for binary data, softmax regression for discrete data). This model was introduced by Frey (1998) and has O(d^2) parameters when there are d variables to model. It is illustrated in figure 20.8.

If the variables are continuous, a linear auto-regressive model is merely another way to formulate a multivariate Gaussian distribution, capturing linear pairwise interactions between the observed variables. Linear auto-regressive networks are essentially the generalization of linear classification methods to generative modeling. They therefore have the same
advantages and disadvantages as linear classifiers. Like linear classifiers, they may be trained with convex loss functions, and sometimes admit closed-form solutions (as in the Gaussian case). Like linear classifiers, the model itself does not offer a way of increasing its capacity, so capacity must be raised using techniques like basis expansions of the input or the kernel trick.

Figure 20.9: A neural auto-regressive network predicts the i-th variable x_i from the i−1 previous ones, but is parametrized so that features (groups of hidden units denoted h_i) that are functions of x_1, ..., x_i can be reused in predicting all
of the subsequent variables x_{i+1}, x_{i+2}, ..., x_d.

20.10.9 Neural Auto-Regressive Networks

Neural auto-regressive networks (Bengio and Bengio, 2000a,b) have the same left-to-right graphical model as logistic auto-regressive networks (figure 20.8) but employ a different parametrization of the conditional distributions within that graphical model structure. The new parametrization is more powerful in the sense that its capacity can be increased as much as needed, allowing approximation of any joint distribution. The new parametrization can also improve generalization by introducing a parameter sharing and feature sharing principle common to deep learning in general. The models were motivated by the objective of avoiding the curse of dimensionality arising out of traditional tabular graphical models, sharing the same structure as figure 20.8. In tabular discrete probabilistic models, each conditional distribution is represented by a table of probabilities
, with one entry and one parameter for each possible configuration of the variables involved. By using a neural network instead, two advantages are obtained:
1. The parametrization of each p(x_i | x_{i-1}, ..., x_1) by a neural network with (i−1) × k inputs and k outputs (if the variables are discrete and take k values, encoded one-hot) allows one to estimate the conditional probability without requiring an exponential number of parameters (and examples), yet still is able to capture high-order dependencies between the random variables.

2. Instead of having a different neural network for the prediction of each x_i, the left-to-right connectivity illustrated in figure 20.9 allows one to merge all the neural networks into one. Equivalently, it means that the hidden layer features computed for predicting x_i can be reused for predicting x_{i+k} (k > 0). The hidden units are thus organized in groups that have the particularity that all the units in the i-th group only depend on the input values x_1, ..., x_i. The parameters used to compute these hidden units are jointly optimized to improve the prediction of all the variables in the sequence. This is an instance of the reuse principle that recurs throughout deep learning in scenarios ranging from recurrent and convolutional
network architectures to multi-task and transfer learning.

Each p(x_i | x_{i-1}, ..., x_1) can represent a conditional distribution by having outputs of the neural network predict parameters of the conditional distribution of x_i, as discussed in section 6.2.1.1. Although the original neural auto-regressive networks were initially evaluated in the context of purely discrete multivariate data (with a sigmoid output for a Bernoulli variable or softmax output for a multinoulli variable), it is natural to extend such models to continuous variables or joint distributions involving both discrete and continuous variables.

20.10.10 NADE

The neural autoregressive density estimator (NADE) is a very successful recent form of neural auto-regressive network (Larochelle and Murray, 2011). The connectivity is the same as for the original neural auto-regressive network of Bengio and Bengio (2000b), but NADE introduces an additional
parameter sharing scheme, as illustrated in figure 20.10. The parameters of the hidden units of different groups j are shared: the weights W_{j,k,i} from the i-th input x_i to the k-th element of the j-th group of hidden units h_k^{(j)} (for j ≥ i) are shared among the groups,

W_{j,k,i} = W_{k,i},   (20.83)

while the remaining weights, where j < i, are zero.
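The sharing scheme of equation 20.83 has a useful computational consequence: group j's pre-activation is the running sum of the shared columns W_{:,i} x_i for i ≤ j, so all d groups can be computed in O(d · n_hidden) time without ever materializing the full tensor W_{j,k,i}. A small NumPy sketch, with hypothetical function names and sigmoid hidden units assumed:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nade_hidden_groups(x, W, c):
    """Hidden groups h^(1), ..., h^(d) under the sharing scheme of
    eq. 20.83: group j sees only inputs x_1, ..., x_j, and the weight
    from x_i is the shared column W[:, i] for every group j >= i.
    The running sum makes this O(d * n_hidden) rather than
    O(d^2 * n_hidden)."""
    d = len(x)
    groups = []
    a = c.astype(float).copy()      # shared hidden bias
    for j in range(d):
        a = a + W[:, j] * x[j]      # fold in the contribution of x_{j+1}
        groups.append(sigmoid(a))   # h^(j+1) is a function of x_1..x_{j+1}
    return np.array(groups)
```

Each row of the result equals what a naive per-group network with the masked, shared weight tensor would compute, which is exactly the equivalence that the sharing pattern buys.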
Figure 20.10: An illustration of the neural autoregressive density estimator (NADE). The hidden units are organized in groups h^{(j)} so that only the inputs x_1, ..., x_i participate in computing h^{(i)} and predicting p(x_j | x_{j-1}, ..., x_1), for j > i. NADE is differentiated from earlier neural auto-regressive networks by the use of a particular weight sharing pattern: W_{j,k,i} = W_{k,i} is shared (indicated in the figure by the use of the same line pattern for every instance of a replicated weight) for all the weights going out from x_i to the k-th unit of any group j ≥ i. Recall that the vector (W_{1,i}, W_{2,i}, ..., W_{n,i}) is denoted W_{:,i}.

Larochelle and Murray (2011) chose this sharing scheme so that forward propagation in a NADE model loosely resembles the computations performed in mean field inference to fill in missing inputs in an RBM. This mean field inference corresponds to running a recurrent network with shared weights, and the first step of that inference is the same as in NADE. The only difference is that with NADE, the output weights connecting the hidden units to the output are parametrized independently from the weights connecting the input units to the hidden units. In the RBM, the hidden-to-output weights are the transpose of the input-to-hidden weights.
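Putting the pieces together, the log-likelihood of a binary NADE can be sketched in a few lines; note that the output weights V are a parameter separate from W, unlike the tied weights of an RBM mentioned above. Names and shapes here are illustrative, not the authors' code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nade_log_prob(x, W, V, b, c):
    """log p(x) for a binary NADE.
    W: (n_hidden, d) shared input-to-hidden weights (eq. 20.83)
    V: (d, n_hidden) output weights, independent of W
    b: (d,) output biases;  c: (n_hidden,) hidden biases."""
    a = c.astype(float)
    log_p = 0.0
    for i in range(len(x)):
        h = sigmoid(a)                    # depends only on x_1 .. x_{i-1}
        p_i = sigmoid(V[i] @ h + b[i])    # p(x_i = 1 | x_<i)
        log_p += np.log(p_i if x[i] else 1.0 - p_i)
        a = a + W[:, i] * x[i]            # fold x_i in for later steps
    return log_p
```

Because the model is a chain-rule product of valid Bernoulli conditionals, the probabilities of all configurations sum to one, which is easy to verify exhaustively for a tiny d.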
The NADE architecture can be extended to mimic not just one time step of the mean field recurrent inference but k steps. This approach is called NADE-k (Raiko et al., 2014).

As mentioned previously, auto-regressive networks may be extended to process continuous-valued data. A particularly powerful and generic way of parametrizing a continuous density is as a Gaussian mixture (introduced in section 3.9.6) with mixture weights α_i (the coefficient or prior probability for component i), per-component conditional mean µ_i and per-component conditional variance σ^2_i. A model called RNADE (Uria et al., 2013) uses this parametrization to extend NADE to real values. As with other mixture density networks, the parameters of this
distribution are outputs of the network, with the mixture weight probabilities produced by a softmax unit, and the variances parametrized so that they are positive. Stochastic gradient descent can be numerically ill-behaved due to the interactions between the conditional means µ_i and the conditional variances σ^2_i. To reduce this difficulty, Uria et al. (2013) use a pseudo-gradient that replaces the gradient on the mean, in the back-propagation phase.

Another very interesting extension of the neural auto-regressive architectures gets rid of the need to choose an arbitrary order for the observed variables (Murray and Larochelle, 2014). In auto-regressive networks, the idea is to train the network to be able to cope with any order by randomly sampling orders and providing the information to hidden units specifying which of the inputs are observed (on the right side of the conditioning bar) and which are to be predicted and are thus considered missing (on the left side of the conditioning bar). This is nice because it allows one to use a trained auto-regressive network to perform any inference problem (i.e., predict or sample from the probability distribution over any subset of
variables given any subset) extremely efficiently. Finally, since many orders of variables are possible (n! for n variables) and each order o of variables yields a different p(x | o), we can form an ensemble of models for many values of o:

p_ensemble(x) = (1/k) Σ_{i=1}^{k} p(x | o^{(i)}).   (20.84)

This ensemble model usually generalizes better and assigns higher probability to the test set than does an individual model defined by a single ordering.

In the same paper, the authors propose deep versions of the architecture, but unfortunately that immediately makes computation as expensive as in the original neural auto-regressive network (Bengio and Bengio, 2000b). The first layer and the output layer can still be computed in O(nh) multiply-add operations, as in the regular NADE, where h is the number of hidden units (the size of the groups h_i in figures 20.9 and 20.10)
, whereas it is O(n^2 h) in Bengio and Bengio (2000b). However, for the other hidden layers, the computation is O(n^2 h^2) if every "previous" group at layer l participates in predicting the "next" group at layer l + 1, assuming n groups of h hidden units at each layer. Making the i-th group at layer l + 1 depend only on the i-th group at layer l, as in Murray and Larochelle (2014), reduces it to O(nh^2), which is still h times worse than the regular NADE.
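Equation 20.84 is simply a uniform mixture over order-specific models. A toy sketch, using hypothetical tabular component distributions rather than trained networks, makes the operation and its normalization property concrete:

```python
def ensemble_prob(models, x):
    """Eq. 20.84: average the probability assigned to configuration x
    by each order-specific model p(x | o^(i))."""
    return sum(p[x] for p in models) / len(models)

# Two hypothetical component models over a binary pair (x1, x2),
# e.g. chain-rule factorizations of the same data under two orderings.
p_o1 = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
p_o2 = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.3}
```

Because every component is itself normalized, the uniform mixture sums to one over all configurations, so the ensemble is a valid distribution in its own right.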
20.11 Drawing Samples from Autoencoders

In chapter 14, we saw that many kinds of autoencoders learn the data distribution. There are close connections between score matching, denoising autoencoders, and contractive autoencoders. These connections demonstrate that some kinds of autoencoders learn the data distribution in some way. We have not yet seen how to draw samples from such models.

Some kinds of autoencoders, such as the variational autoencoder, explicitly represent a probability distribution and admit straightforward ancestral sampling. Most other kinds of autoencoders require MCMC sampling.

Contractive autoencoders are designed to recover an estimate of the tangent plane of the data manifold. This means that repeated encoding and decoding with injected noise will induce a random walk along the surface of the manifold (Rifai et al., 2012; Mesnil et al., 2012). This manifold diffusion technique is a kind of Markov chain. There is also a more general Markov chain that can sample from any denoising autoencoder.

20.11.1 Markov Chain Associated with Any Denoising Autoencoder

The above discussion left open
the question of what noise to inject and where, in order to obtain a Markov chain that would generate from the distribution estimated by the autoencoder. Bengio et al. (2013c) showed how to construct such a Markov chain for generalized denoising autoencoders. Generalized denoising autoencoders are specified by a denoising distribution for sampling an estimate of the clean input given the corrupted input.

Each step of the Markov chain that generates from the estimated distribution consists of the following sub-steps, illustrated in figure 20.11:

1. Starting from the previous state x, inject corruption noise, sampling x̃ from C(x̃ | x).

2. Encode x̃ into h = f(x̃).

3. Decode h to obtain the parameters ω = g(h) of p(x | ω = g(h)) = p(x | x̃).

4. Sample the next state x from p(x | ω = g(h)) = p(x | x̃).
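Under the common Gaussian-corruption, squared-error setting described in the figure caption below, the four sub-steps can be sketched in a few lines. Here f, g, and the noise levels are placeholders, not a trained model:

```python
import numpy as np

def dae_chain_step(x, f, g, sigma_c, sigma_r, rng):
    """One step of the generative Markov chain of a denoising autoencoder,
    assuming Gaussian corruption C and a Gaussian reconstruction
    distribution whose mean is the decoder output."""
    x_tilde = x + rng.normal(0.0, sigma_c, size=x.shape)   # 1. corrupt: sample from C(x_tilde | x)
    h = f(x_tilde)                                         # 2. encode
    omega = g(h)                                           # 3. decode -> mean of p(x | omega)
    return omega + rng.normal(0.0, sigma_r, size=x.shape)  # 4. sample the next state
```

With the noise levels set to zero and a contractive toy decoder, the chain collapses onto the decoder's fixed point, which illustrates why the reconstruction function pulls states toward the data manifold.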
Figure 20.11: Each step of the Markov chain associated with a trained denoising autoencoder, which generates samples from the probabilistic model implicitly trained by the denoising log-likelihood criterion. Each step consists in (a) injecting noise via the corruption process C into state x, yielding x̃; (b) encoding it with function f, yielding h = f(x̃); (c) decoding the result with function g, yielding parameters ω for the reconstruction distribution; and (d) given ω, sampling a new state from the reconstruction distribution p(x | ω = g(f(x̃))). In the typical squared reconstruction error case, g(h) = x̂, which estimates E[x | x̃]; corruption consists in adding Gaussian noise, and sampling from p(x | ω) consists in adding Gaussian noise a second time, to the reconstruction x̂. The latter noise level should correspond to the mean squared error of reconstructions, whereas the injected noise is a hyperparameter that controls the mixing speed as well as the extent to
which the estimator smooths the empirical distribution (Vincent, 2011). In the example illustrated here, only the C and p conditionals are stochastic steps (f and g are deterministic computations), although noise can also be injected inside the autoencoder, as in generative stochastic networks (Bengio et al., 2014).
Bengio et al. (2014) showed that if the autoencoder p(x | x̃) forms a consistent estimator of the corresponding true conditional distribution, then the stationary distribution of the above Markov chain forms a consistent estimator (albeit an implicit one) of the data generating distribution of x.

20.11.2 Clamping and Conditional Sampling

Similarly to Boltzmann machines, denoising autoencoders and their generalizations (such as GSNs, described below) can be used to sample from a conditional distribution p(x_f | x_o), simply by clamping the observed units x_o and only resampling the free units x_f given x_o and the sampled latent variables (if any). For example, MP-DBMs can be interpreted as a form of denoising autoencoder, and are able to sample missing inputs. GSNs later generalized some of the ideas present in MP-DBMs to perform the same operation (Bengio et al., 2014). Alain et al. (2015) identified a missing condition from Proposition 1 of Bengio et al. (2014), which is
that the transition operator (defined by the stochastic mapping going from one state of the chain to the next) should satisfy a property called detailed balance, which specifies that a Markov chain at equilibrium will remain in equilibrium whether the transition operator is run forward or in reverse.

An experiment in clamping half of the pixels (the right part of the image) and running the Markov chain on the other half is shown in figure 20.12.
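Conditional sampling by clamping can be layered on top of any such transition operator: after each step, the observed coordinates are simply reset to their clamped values, so only the free units are ever resampled. A sketch with an arbitrary placeholder `step_fn`:

```python
import numpy as np

def clamped_step(x, step_fn, observed_mask, x_obs):
    """Run one Markov-chain step, then clamp the observed units:
    only the free units (where observed_mask is False) are resampled."""
    x_next = step_fn(x)
    return np.where(observed_mask, x_obs, x_next)
```

Iterating `clamped_step` leaves the clamped coordinates fixed at their observed values while the chain explores the conditional distribution over the free coordinates.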
Figure 20.12: Illustration of clamping the right half of the image and running the Markov chain by resampling only the left half at each step. These samples come from a GSN trained to reconstruct MNIST digits at each time step using the walk-back procedure.

20.11.3 Walk-Back Training Procedure

The walk-back training procedure was proposed by Bengio et al. (2013c) as a way to accelerate the convergence of generative training of denoising autoencoders. Instead of performing a one-step encode-decode reconstruction, this procedure consists in alternating multiple stochastic encode-decode steps (as in the generative Markov chain), initialized at a training example (just like with the contrastive divergence algorithm, described in section 18.2), and penalizing the last probabilistic reconstructions (or all the reconstructions along the way). Training with k steps is equivalent (in the sense of achieving the same stationary distribution) to training with one step, but practically has the advantage that spurious modes further from the data can be removed more efficiently.

20.12 Generative Stochastic Networks
20.12 Generative Stochastic Networks

Generative stochastic networks, or GSNs (Bengio et al., 2014), are generalizations of denoising autoencoders that include latent variables h in the generative Markov chain, in addition to the visible variables (usually denoted x).

A GSN is parametrized by two conditional probability distributions that specify one step of the Markov chain:

1. p(x^(k) | h^(k)) tells how to generate the next visible variable given the current latent state. Such a "reconstruction distribution" is also found in denoising autoencoders, RBMs, DBNs and DBMs.

2. p(h^(k) | h^(k-1), x^(k-1)) tells how to update the latent state variable, given the previous latent state and visible variable.

Denoising autoencoders and GSNs differ from classical probabilistic models (directed or undirected) in that they parametrize the generative process itself rather than the mathematical specification of the joint distribution of visible and latent variables. Instead, the latter is defined implicitly, if it exists, as the stationary distribution of the generative Markov chain. The conditions for existence of the stationary distribution are mild and are the same conditions required by standard MCMC methods (see section 17.3).
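One step of such a chain alternates the two conditionals above. The following NumPy sketch uses simple Gaussian conditionals with a tanh latent update; the parametrization, dimensions, and noise scales are all illustrative assumptions, not the GSN of Bengio et al. (2014).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_h(h_prev, x_prev, U, V):
    """p(h^(k) | h^(k-1), x^(k-1)): update the latent state."""
    mean = np.tanh(h_prev @ U + x_prev @ V)
    return mean + 0.1 * rng.normal(size=mean.shape)

def sample_x(h, W):
    """p(x^(k) | h^(k)): the reconstruction distribution."""
    mean = h @ W
    return mean + 0.1 * rng.normal(size=mean.shape)

# Illustrative dimensions and parameters.
n_h, n_x = 3, 2
U = 0.1 * rng.normal(size=(n_h, n_h))
V = 0.1 * rng.normal(size=(n_x, n_h))
W = 0.1 * rng.normal(size=(n_h, n_x))

# Run the chain; the samples x^(k) are draws from the (implicitly
# defined) stationary distribution once the chain has mixed.
h, x = np.zeros(n_h), np.zeros(n_x)
for k in range(100):
    h = sample_h(h, x, U, V)
    x = sample_x(h, W)
print(x)
```

Note that the model's distribution over x is never written down; it exists only as whatever distribution this loop converges to.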
These conditions are necessary to guarantee that the chain mixes, but they can be violated by some choices of the transition distributions (for example, if they were deterministic).

One could imagine various training criteria for GSNs. The one proposed and evaluated by Bengio et al. (2014) is simply the reconstruction log-probability on the visible units, just like for denoising autoencoders. This is achieved by clamping x^(0) = x to the observed example and maximizing the probability of generating x at some subsequent time steps, i.e., maximizing log p(x^(k) = x | h^(k)), where h^(k) is sampled from the chain, given x^(0) = x. In order to estimate the gradient of log p(x^(k) = x | h^(k)) with respect to the other pieces of the model, Bengio et al. (2014) use the reparametrization trick, introduced in section 20.9.
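The reparametrization trick can be illustrated for a Gaussian: writing a sample as a deterministic function of the parameters and of parameter-free noise makes the sample differentiable with respect to the parameters. A minimal sketch, with finite differences standing in for back-propagation:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.normal(size=100000)  # noise drawn once, independent of parameters

def sample(mu, sigma):
    """Reparametrized Gaussian sample: a deterministic, differentiable
    function of (mu, sigma) given the fixed noise eps."""
    return mu + sigma * eps

def loss(mu, sigma):
    # Monte Carlo estimate of E[(mu + sigma * eps)^2].
    return np.mean(sample(mu, sigma) ** 2)

# Analytically, d/dmu E[(mu + sigma*eps)^2] = 2 * mu.
# Because the noise is fixed, the loss is a smooth function of mu and
# we can differentiate straight through the sampling step.
mu, sigma, d = 1.5, 0.8, 1e-5
grad_mu = (loss(mu + d, sigma) - loss(mu - d, sigma)) / (2 * d)
print(grad_mu)  # close to 2 * mu = 3.0
```

Without the reparametrization (i.e., redrawing samples inside each loss evaluation), the finite-difference estimate would be dominated by sampling noise and the gradient could not be obtained this way.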
The walk-back training protocol (described in section 20.11.3) was used by Bengio et al. (2014) to improve training convergence of GSNs.

20.12.1 Discriminant GSNs

The original formulation of GSNs (Bengio et al., 2014) was meant for unsupervised learning and implicitly modeling p(x) for observed data x, but it is possible to modify the framework to optimize p(y | x). For example, Zhou and Troyanskaya (2014) generalize GSNs in this way, by back-propagating the reconstruction log-probability over only the output variables, keeping the input variables fixed. They applied this successfully to model sequences (protein secondary structure) and introduced a (one-dimensional) convolutional structure in the transition operator of the Markov chain. It is important to remember that, for each step of the Markov chain, one generates a new sequence for each layer, and that sequence is the input for computing other layer values (say, the one below and the one above) at the next time step. Hence the Markov chain is really over the output variable (and associated higher-level hidden layers), and the input sequence only serves to condition that chain, with back-propagation enabling the model to learn how the input sequence can condition the output distribution implicitly represented by the Markov chain. It is therefore a case of using the GSN in the context of structured outputs.

Zöhrer and Pernkopf (2014) introduced a hybrid model that combines a supervised objective (as in the work above) and an unsupervised objective (as in the original GSN work), by simply adding (with an adjustable weight) the supervised and unsupervised costs, i.e., the reconstruction log-probabilities of y and x respectively.
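Such a weighted combination is a one-line criterion. A toy sketch, with squared errors standing in for the negative reconstruction log-probabilities; the weight name `lam` and the example values are illustrative assumptions.

```python
import numpy as np

def hybrid_cost(x, y, x_recon, y_pred, lam=0.5):
    """Weighted sum of a supervised cost on y and an unsupervised
    reconstruction cost on x (squared errors stand in for negative
    reconstruction log-probabilities)."""
    supervised = np.mean((y_pred - y) ** 2)
    unsupervised = np.mean((x_recon - x) ** 2)
    return supervised + lam * unsupervised

x = np.array([0.0, 1.0])
y = np.array([1.0])
print(hybrid_cost(x, y, x_recon=np.array([0.1, 0.9]), y_pred=np.array([0.8])))
```

Setting `lam` to zero recovers the purely supervised objective, while large `lam` emphasizes modeling the input distribution.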
Such a hybrid criterion had previously been introduced for RBMs by Larochelle and Bengio (2008). They show improved classification performance using this scheme.

20.13 Other Generation Schemes

The methods we have described so far use either MCMC sampling, ancestral sampling, or some mixture of the two to generate samples. While these are the most popular approaches to generative modeling, they are by no means the only approaches.

Sohl-Dickstein et al. (2015) developed a diffusion inversion training scheme for learning a generative model, based on non-equilibrium thermodynamics. The approach is based on the idea that the probability distributions we wish to sample from have structure. This structure can gradually be destroyed by a diffusion process that incrementally changes the probability distribution to have more entropy. To form a generative model, we can run the process in reverse, by training a model that gradually restores the structure to an unstructured distribution.
By iteratively applying a process that brings a distribution closer to the target one, we can gradually approach that target distribution. This approach resembles MCMC methods in the sense that it involves many iterations to produce a sample. However, the model is defined to be the probability distribution produced by the final step of the chain. In this sense, there is no approximation induced by the iterative procedure. The approach introduced by Sohl-Dickstein et al. (2015) is also very close to the generative interpretation of the denoising autoencoder (section 20.11.1). As with the denoising autoencoder, diffusion inversion trains a transition operator that attempts to probabilistically undo the effect of adding some noise. The difference is that diffusion inversion requires undoing only one step of the diffusion process, rather than traveling all the way back to a clean data point. This addresses the following dilemma present with the ordinary reconstruction log-likelihood objective of denoising autoencoders: with small levels of noise the learner only sees configurations near the data points, while with large levels of noise it is asked to do an almost impossible job (because the denoising distribution is highly complex and multimodal). With the diffusion inversion objective, the learner can learn the shape of the density around the data points more precisely, as well as remove spurious modes that could show up far from the data points.
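The destroy-then-restore idea can be sketched with a one-dimensional Gaussian, for which the single-step reverse operator is available in closed form. This is only an illustrative special case where the exact reverse conditional plays the role of the learned denoising operator; the distributions, step count, and retention factor are assumptions for the sketch, not the model of Sohl-Dickstein et al. (2015).

```python
import numpy as np

rng = np.random.default_rng(0)

# Data distribution with "structure": N(2, 0.5^2). The forward process
# gradually destroys it, diffusing it toward the unstructured N(0, 1).
m0, v0 = 2.0, 0.25
T, a = 50, 0.9  # number of steps and per-step retention factor

# Forward marginals: x_t = a * x_{t-1} + sqrt(1 - a^2) * noise.
m, v = [m0], [v0]
for t in range(T):
    m.append(a * m[-1])
    v.append(a * a * v[-1] + (1.0 - a * a))

def reverse_step(x_t, t):
    """Exact single-step reverse operator p(x_{t-1} | x_t) for this
    Gaussian case -- the analog of the learned one-step denoiser."""
    prec = 1.0 / v[t - 1] + a * a / (1.0 - a * a)
    mean = (m[t - 1] / v[t - 1] + a * x_t / (1.0 - a * a)) / prec
    return mean + np.sqrt(1.0 / prec) * rng.normal(size=x_t.shape)

# Generate: start from unstructured noise and restore structure step by
# step. Each reverse step only has to undo one small noising step.
x = rng.normal(size=20000)  # approximately the final forward marginal
for t in range(T, 0, -1):
    x = reverse_step(x, t)
print(x.mean(), x.std())  # close to (2.0, 0.5)
```

The model distribution is, by definition, whatever the final reverse step produces, so the many iterations introduce no approximation beyond the learned (here, exact) one-step operators.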
Another approach to sample generation is the approximate Bayesian computation (ABC) framework (Rubin et al., 1984). In this approach, samples are rejected or modified in order to make the moments of selected functions of the samples match those of the desired distribution. While this idea uses the moments of the samples as in moment matching, it differs from moment matching because it modifies the samples themselves, rather than training the model to automatically emit samples with the correct moments. Bachman and Precup (2015) showed how to use ideas from ABC in the context of deep learning, by using ABC to shape the MCMC trajectories of GSNs.

We expect that many other possible approaches to generative modeling await discovery.

20.14 Evaluating Generative Models

Researchers studying generative models often need to compare one generative model to another, usually in order to demonstrate that a newly invented generative model is better at capturing some distribution than the pre-existing models. This can be a difficult and subtle task. In many cases, we cannot actually evaluate the log probability of the data under the model, but only an approximation.
In these cases, it is important to think and communicate clearly about exactly what is being measured. For example, suppose we can evaluate a stochastic estimate of the log-likelihood for model A, and a deterministic lower bound on the log-likelihood for model B. If model A gets a higher score than model B, which is better? If we care about determining which model has a better internal representation of the distribution, we actually cannot tell, unless we have some way of determining how loose the bound for model B is. However, if we care about how well we can use the model in practice, for example to perform anomaly detection, then it is fair to say that a model is preferable based on a criterion specific to the practical task of interest, e.g., based on ranking test examples with ranking criteria such as precision and recall.

Another subtlety of evaluating generative models is that the evaluation metrics are often hard research problems in and of themselves. It can be very difficult to establish that models are being compared fairly. For example, suppose we use AIS to estimate log Z in order to compute log p̃(x) − log Z for a new model we have just invented. A computationally economical implementation of AIS may fail to find several modes of the model distribution and underestimate Z, which will result in us overestimating log p(x). It can thus be difficult to tell whether a high likelihood estimate is due to a good model or a bad AIS implementation.

Other fields of machine learning usually allow for some variation in the preprocessing of the data. For example, when comparing the accuracy of object recognition algorithms, it is usually acceptable to preprocess the input images slightly differently for each algorithm based on what kind of input requirements it has.
Generative modeling is different because changes in preprocessing, even very small and subtle ones, are completely unacceptable. Any change to the input data changes the distribution to be captured and fundamentally alters the task. For example, multiplying the input by 0.1 will artificially increase likelihood by a factor of 10.

Issues with preprocessing commonly arise when benchmarking generative models on the MNIST dataset, one of the more popular generative modeling benchmarks. MNIST consists of grayscale images. Some models treat MNIST images as points in a real vector space, while others treat them as binary. Yet others treat the grayscale values as probabilities for binary samples. It is essential to compare real-valued models only to other real-valued models, and binary-valued models only to other binary-valued models. Otherwise the likelihoods measured are not on the same space.
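The preprocessing effect mentioned above follows from the change-of-variables formula for densities: scaling each input dimension by c divides the density by |c| per dimension, so scaling by 0.1 multiplies it by 10. A quick numerical check with a one-dimensional Gaussian (the particular point and parameters are arbitrary):

```python
import numpy as np

def gauss_logpdf(x, mu, sigma):
    # Log-density of N(mu, sigma^2) at x.
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - 0.5 * ((x - mu) / sigma) ** 2

# Data x with density p(x) under N(0, 1); rescaled data x' = 0.1 * x
# has density p'(x') = p(x) / 0.1, i.e. log p' = log p + log 10.
x = 1.3
logp = gauss_logpdf(x, mu=0.0, sigma=1.0)
# The correctly transformed model for x' = 0.1 * x is N(0, 0.1^2).
logp_scaled = gauss_logpdf(0.1 * x, mu=0.0, sigma=0.1)
print(logp_scaled - logp)  # log(10) ~= 2.302585
```

For a d-dimensional input the log-likelihood shifts by d * log(10), which is why even a tiny rescaling makes likelihoods from differently preprocessed data incomparable.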
For binary-valued models, the log-likelihood can be at most zero, while for real-valued models it can be arbitrarily high, since it is the measurement of a density. Among binary models, it is important to compare models using exactly the same kind of binarization. For example, we might binarize a gray pixel to 0 or 1 by thresholding at 0.5, or by drawing a random sample whose probability of being 1 is given by the gray pixel intensity. If we use the random binarization, we might binarize the whole dataset once, or we might draw a different random example for each step of training and then draw multiple samples for evaluation. Each of these three schemes yields wildly different likelihood numbers, and when comparing models it is important that both models use the same binarization scheme for training and for evaluation. In fact, researchers who apply a single random binarization step share a file containing the results of the random binarization, so that there is no difference in results based on different outcomes of the binarization step.

Because being able to generate realistic samples from the data distribution is one of the goals of a generative model, practitioners often evaluate generative models by visually inspecting the samples. In the best case, this is done not by the researchers themselves, but by experimental subjects who do not know the source of the samples (Denton et al., 2015). Unfortunately, it is possible for a very poor probabilistic model to produce very good samples. A common practice to verify whether the model merely copies some of the training examples is illustrated in figure 16.1. The idea is to show, for some of the generated samples, their nearest neighbor in the training set, according to Euclidean distance in the space of x. This test is intended to detect the case where the model overfits the training set and just reproduces training instances. It is even possible to simultaneously underfit and overfit yet still produce samples that individually look good. Imagine a generative model trained on images of dogs and cats that simply learns to reproduce the training images of dogs.
Such a model has clearly overfit, because it does not produce images that were not in the training set, but it has also underfit, because it assigns no probability to the training images of cats. Yet a human observer would judge each individual image of a dog to be high quality. In this simple example, it would be easy for a human observer who can inspect many samples to determine that the cats are absent. In more realistic settings, a generative model trained on data with tens of thousands of modes may ignore a small number of modes, and a human observer would not easily be able to inspect or remember enough images to detect the missing variation.

Since the visual quality of samples is not a reliable guide, we often also evaluate the log-likelihood that the model assigns to the test data, when this is computationally feasible. Unfortunately, in some cases the likelihood seems not to measure any attribute of the model that we really care about.
For example, real-valued models of MNIST can obtain arbitrarily high likelihood by assigning arbitrarily low variance to background pixels that never change. Models and algorithms that detect these constant features can reap unlimited rewards, even though this is not a very useful thing to do. The potential to achieve a cost approaching negative infinity is present for any kind of maximum likelihood problem with real values, but it is especially problematic for generative models of MNIST because so many of the output values are trivial to predict. This strongly suggests a need for developing other ways of evaluating generative models.
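The variance pathology is easy to verify numerically: the log-density of a Gaussian at its own mean grows without bound as the variance shrinks. A minimal check (the particular variances are arbitrary):

```python
import numpy as np

def gauss_logpdf(x, mu, sigma):
    # Log-density of N(mu, sigma^2) at x.
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - 0.5 * ((x - mu) / sigma) ** 2

# A "background pixel" that is exactly 0 in every image: a model that
# gives it a Gaussian with mean 0 and shrinking variance collects an
# unbounded log-likelihood reward from this one trivial dimension.
for sigma in [1.0, 1e-2, 1e-4, 1e-8]:
    print(sigma, gauss_logpdf(0.0, mu=0.0, sigma=sigma))
```

The printed log-densities grow roughly like -log(sigma), so a model can inflate its test log-likelihood arbitrarily on constant pixels without modeling anything interesting.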
Theis et al. (2015) review many of the issues involved in evaluating generative models, including many of the ideas described above. They highlight the fact that there are many different uses of generative models and that the choice of metric must match the intended use of the model. For example, some generative models are better at assigning high probability to most realistic points, while other generative models are better at rarely assigning high probability to unrealistic points. These differences can result from whether a generative model is designed to minimize D_KL(p_data‖p_model) or D_KL(p_model‖p_data), as illustrated in figure 3.6. Unfortunately, even when we restrict the use of each metric to the task it is most suited for, all of the metrics currently in use continue to have serious weaknesses. One of the most important research topics in generative modeling is therefore not just how to improve generative models, but in fact designing new techniques to measure our progress.

20.15 Conclusion

Training generative models with hidden units is a powerful way to make models understand the world represented in the given training data.
By learning a model p_model(x) and a representation p_model(h | x), a generative model can provide answers to many inference problems about the relationships between input variables in x, and can provide many different ways of representing x by taking expectations of h at various layers of the hierarchy. Generative models hold the promise to provide AI systems with a framework for all of the many different intuitive concepts they need to understand, and the ability to reason about these concepts in the face of uncertainty. We hope that our readers will find new ways to make these approaches more powerful and continue the journey to understanding the principles that underlie learning and intelligence.
Bibliography

Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mane, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viegas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X. (2015). TensorFlow: large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.
Ackley, D. H., Hinton, G. E., and Sejnowski, T. J. (1985). A learning algorithm for Boltzmann machines. Cognitive Science, 9, 147–169.

Alain, G. and Bengio, Y. (2013). What regularized auto-encoders learn from the data generating distribution. In ICLR'2013. arXiv:1211.4246.

Alain, G., Bengio, Y., Yao, L., Thibodeau-Laufer, E., Yosinski, J., and Vincent, P. (2015). GSNs: generative stochastic networks. arXiv:1503.05571.

Anderson, E. (1935). The irises of the Gaspé Peninsula. Bulletin of the American Iris Society, 59, 2–5.
Ba, J., Mnih, V., and Kavukcuoglu, K. (2014). Multiple object recognition with visual attention. arXiv:1412.7755.

Bachman, P. and Precup, D. (2015). Variational generative stochastic networks with collaborative shaping. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6–11 July 2015, pages 1964–1972.

Bacon, P.-L., Bengio, E., Pineau, J., and Precup, D. (2015). Conditional computation in neural networks using a decision-theoretic approach. In 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2015).
Bagnell, J. A. and Bradley, D. M. (2009). Differentiable sparse coding. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21 (NIPS'08), pages 113–120.
Bahdanau, D., Cho, K., and Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. In ICLR'2015. arXiv:1409.0473.

Bahl, L. R., Brown, P., de Souza, P. V., and Mercer, R. L. (1987). Speech recognition with continuous-parameter hidden Markov models. Computer, Speech and Language, 2, 219–234.

Baldi, P. and Hornik, K. (1989). Neural networks and principal component analysis: learning from examples without local minima. Neural Networks, 2, 53–58.

Baldi, P., Brunak, S., Frasconi, P., Soda, G., and Pollastri, G. (1999). Exploiting the past and the future in protein secondary structure prediction. Bioinformatics, 15(11), 937–946.
Baldi, P., Sadowski, P., and Whiteson, D. (2014). Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 5.

Ballard, D. H., Hinton, G. E., and Sejnowski, T. J. (1983). Parallel vision computation. Nature.

Barlow, H. B. (1989). Unsupervised learning. Neural Computation, 1, 295–311.

Barron, A. E. (1993). Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39, 930–945.

Bartholomew, D. J. (1987). Latent Variable Models and Factor Analysis. Oxford University Press.

Basilevsky, A. (1994). Statistical Factor Analysis and Related Methods: Theory and Applications. Wiley.
Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I. J., Bergeron, A., Bouchard, N., and Bengio, Y. (2012). Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop.

Basu, S. and Christensen, J. (2013). Teaching classification boundaries to humans. In AAAI'2013.

Baxter, J. (1995). Learning internal representations. In Proceedings of the 8th International Conference on Computational Learning Theory (COLT'95), pages 311–320, Santa Cruz, California. ACM Press.

Bayer, J. and Osendorfer, C. (2014). Learning stochastic recurrent networks. ArXiv e-prints.

Becker, S. and Hinton, G. (1992). A self-organizing neural network that discovers surfaces in random-dot stereograms. Nature, 355, 161–163.
Behnke, S. (2001). Learning iterative image reconstruction in the neural abstraction pyramid. International Journal of Computational Intelligence and Applications, 1(4), 427–438.

Beiu, V., Quintana, J. M., and Avedillo, M. J. (2003). VLSI implementations of threshold logic: a comprehensive survey. IEEE Transactions on Neural Networks, 14(5), 1217–1243.

Belkin, M. and Niyogi, P. (2002). Laplacian eigenmaps and spectral techniques for embedding and clustering. In T. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14 (NIPS'01), Cambridge, MA. MIT Press.

Belkin, M. and Niyogi, P. (2003). Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6), 1373–1396.
Bengio, E., Bacon, P.-L., Pineau, J., and Precup, D. (2015a). Conditional computation in neural networks for faster models. arXiv:1511.06297.

Bengio, S. and Bengio, Y. (2000a). Taking on the curse of dimensionality in joint distributions using neural networks. IEEE Transactions on Neural Networks, special issue on data mining and knowledge discovery, 11(3), 550–557.

Bengio, S., Vinyals, O., Jaitly, N., and Shazeer, N. (2015b). Scheduled sampling for sequence prediction with recurrent neural networks. Technical report, arXiv:1506.03099.

Bengio, Y. (1991). Artificial Neural Networks and Their Application to Sequence Recognition. Ph.D. thesis, McGill University (Computer Science), Montreal, Canada.

Bengio, Y. (2000). Gradient-based optimization of hyperparameters. Neural Computation, 12(8), 1889–1900.
Bengio, Y. (2002). New distributed probabilistic language models. Technical Report 1215, Dept. IRO, Université de Montréal.

Bengio, Y. (2009). Learning Deep Architectures for AI. Now Publishers.

Bengio, Y. (2013). Deep learning of representations: looking forward. In Statistical Language and Speech Processing, volume 7978 of Lecture Notes in Computer Science, pages 1–37. Springer. Also available as arXiv:1305.0445.

Bengio, Y. (2015). Early inference in energy-based models approximates back-propagation. Technical Report arXiv:1510.02777, Université de Montréal.
Bengio, Y. and Bengio, S. (2000b). Modeling high-dimensional discrete data with multi-layer neural networks. In NIPS 12, pages 400–406. MIT Press. 705, 707, 708, 710
Bengio, Y. and Delalleau, O. (2009). Justifying and generalizing contrastive divergence. Neural Computation, 21(6), 1601–1621. 513, 611
Bibliography
Bengio, Y. and Grandvalet, Y. (2004). No unbiased estimator of the variance of k-fold cross-validation. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16 (NIPS'03), Cambridge, MA. MIT Press. 122
Bengio, Y. and LeCun, Y. (2007). Scaling learning algorithms towards AI. In Large Scale Kernel Machines. 19
Bengio, Y. and Monperrus, M. (2005). Non-local manifold tangent learning. In L. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17 (NIPS'04), pages 129–136. MIT Press. 160, 519
Bengio, Y. and Sénécal, J.-S. (2003). Quick training of probabilistic neural nets by importance sampling. In Proceedings of AISTATS 2003. 470
Bengio, Y. and Sénécal, J.-S. (2008). Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Trans. Neural Networks, 19(4), 713–722. 470
Bengio, Y., De Mori, R., Flammia, G., and Kompe, R. (1991). Phonetically motivated acoustic parameters for continuous speech recognition using artificial neural networks. In Proceedings of EuroSpeech'91. 27, 459
Bengio, Y., De Mori, R., Flammia, G., and Kompe, R. (1992). Neural network–Gaussian mixture hybrid for speech recognition or density estimation. In NIPS 4, pages 175–182. Morgan Kaufmann. 459
Bengio, Y., Frasconi, P., and Simard, P. (1993). The problem of learning long-term dependencies in recurrent networks. In IEEE International Conference on Neural Networks, pages 1183–1195, San Francisco. IEEE Press. (Invited paper). 403
Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Tr. Neural Nets. 18, 401, 403, 411
Bengio, Y., Latendresse, S., and Dugas, C. (1999). Gradient-based learning of hyper-parameters. Learning Conference, Snowbird. 435
Bengio, Y., Ducharme, R., and Vincent, P. (2001). A neural probabilistic language model. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, NIPS'2000, pages 932–938. MIT Press. 18, 447, 464, 466, 472, 477, 482
Bengio, Y., Ducharme, R., Vincent, P., and Jauvin, C. (2003). A neural probabilistic language model. JMLR, 3, 1137–1155. 466, 472
Bengio, Y., Le Roux, N., Vincent, P., Delalleau, O., and Marcotte, P. (2006a). Convex neural networks. In NIPS'2005, pages 123–130. 258
Bengio, Y., Delalleau, O., and Le Roux, N. (2006b). The curse of highly variable functions for local kernel machines. In NIPS'2005. 158
Bengio, Y., Larochelle, H., and Vincent, P. (2006c). Non-local manifold Parzen windows. In NIPS'2005. MIT Press. 160, 520
Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. (2007). Greedy layer-wise training of deep networks. In NIPS'2006. 14, 19, 201, 323, 324, 528, 530
Bengio, Y., Louradour, J., Collobert, R., and Weston, J. (2009). Curriculum learning. In ICML'09. 328
Bengio, Y., Mesnil, G., Dauphin, Y., and Rifai, S. (2013a). Better mixing via deep representations. In ICML'2013. 604
Bengio, Y., Léonard, N., and Courville, A. (2013b). Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv:1308.3432. 448, 450, 689, 691
Bengio, Y., Yao, L., Alain, G., and Vincent, P. (2013c). Generalized denoising auto-encoders as generative models. In NIPS'2013. 507, 711, 714
Bengio, Y., Courville, A., and Vincent, P. (2013d). Representation learning: a review and new perspectives. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), 35(8), 1798–1828. 555
Bengio, Y., Thibodeau-Laufer, E., Alain, G., and Yosinski, J. (2014). Deep generative stochastic networks trainable by backprop. In ICML'2014. 711, 712, 713, 714, 715
Bennett, C. (1976). Efficient estimation of free energy differences from Monte Carlo data. Journal of Computational Physics, 22(2), 245–268. 628
Bennett, J. and Lanning, S. (2007). The Netflix prize. 479
Berger, A. L., Della Pietra, V. J., and Della Pietra, S. A. (1996). A maximum entropy approach to natural language processing. Computational Linguistics, 22, 39–71. 473
Berglund, M. and Raiko, T. (2013). Stochastic gradient estimate variance in contrastive divergence and persistent contrastive divergence. CoRR, abs/1312.6002. 614
Bergstra, J. (2011). Incorporating complex cells into neural networks for pattern classification. Ph.D. thesis, Université de Montréal. 255
Bergstra, J. and Bengio, Y. (2009). Slow, decorrelated features for pretraining complex cell-like networks. In NIPS'2009. 494
Bergstra, J. and Bengio, Y. (2012). Random search for hyper-parameter optimization. J. Machine Learning Res., 13, 281–305. 433, 434, 435
Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proc. SciPy. 25, 82, 214, 222, 446
Bergstra, J., Bardenet, R., Bengio, Y., and Kégl, B. (2011). Algorithms for hyper-parameter optimization. In NIPS'2011. 436
Berkes, P. and Wiskott, L. (2005). Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision, 5(6), 579–602. 495
Bertsekas, D. P. and Tsitsiklis, J. (1996). Neuro-Dynamic Programming. Athena Scientific. 106
Besag, J. (1975). Statistical analysis of non-lattice data. The Statistician, 24(3), 179–195. 615
Bishop, C. M. (1994). Mixture density networks. 189
Bishop, C. M. (1995a). Regularization and complexity control in feed-forward networks. In Proceedings International Conference on Artificial Neural Networks ICANN'95, volume 1, pages 141–148. 242, 250
Bishop, C. M. (1995b). Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1), 108–116. 242
Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer. 98, 146
Blum, A. L. and Rivest, R. L. (1992). Training a 3-node neural network is NP-complete. 293
Blumer, A., Ehrenfeucht, A., Haussler, D., and Warmuth, M. K. (1989). Learnability and the Vapnik–Chervonenkis dimension. Journal of the ACM, 36(4), 929–965. 114
Bonnet, G. (1964). Transformations des signaux aléatoires à travers les systèmes non linéaires sans mémoire. Annales des Télécommunications, 19(9–10), 203–220. 689
Bordes, A., Weston, J., Collobert, R., and Bengio, Y. (2011). Learning structured embeddings of knowledge bases. In AAAI 2011. 484
Bordes, A., Glorot, X., Weston, J., and Bengio, Y. (2012). Joint learning of words and meaning representations for open-text semantic parsing. AISTATS'2012. 401, 484, 485
Bordes, A., Glorot, X., Weston, J., and Bengio, Y. (2013a). A semantic matching energy function for learning with multi-relational data. Machine Learning: Special Issue on Learning Semantics. 483
Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., and Yakhnenko, O. (2013b). Translating embeddings for modeling multi-relational data. In C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2787–2795. Curran Associates, Inc. 484
Bornschein, J. and Bengio, Y. (2015). Reweighted wake-sleep. In ICLR'2015, arXiv:1406.2751. 693
Bornschein, J., Shabanian, S., Fischer, A., and Bengio, Y. (2015). Training bidirectional Helmholtz machines. Technical report, arXiv:1506.03877. 693
Boser, B. E., Guyon, I. M., and Vapnik, V. N. (1992). A training algorithm for optimal margin classifiers. In COLT'92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 144–152, New York, NY, USA. ACM. 18, 141
Bottou, L. (1998). Online algorithms and stochastic approximations. In D. Saad, editor, Online Learning in Neural Networks. Cambridge University Press, Cambridge, UK. 296
Bottou, L. (2011). From machine learning to machine reasoning. Technical report, arXiv.1102.1808. 401
Bottou, L. (2015). Multilayer neural networks. Deep Learning Summer School. 440
Bottou, L. and Bousquet, O. (2008). The tradeoffs of large scale learning. In NIPS'2008. 282, 295
Boulanger-Lewandowski, N., Bengio, Y., and Vincent, P. (2012). Modeling temporal dependencies in high-dimensional sequences: application to polyphonic music generation and transcription. In ICML'12. 685, 686
Boureau, Y., Ponce, J., and LeCun, Y. (2010). A theoretical analysis of feature pooling in vision algorithms. In Proc. International Conference on Machine Learning (ICML'10). 345
Boureau, Y., Le Roux, N., Bach, F., Ponce, J., and LeCun, Y. (2011). Ask the locals: multi-way local pooling for image recognition. In Proc. International Conference on Computer Vision (ICCV'11). IEEE. 345
Bourlard, H. and Kamp, Y. (1988). Auto-association by multilayer perceptrons and singular value decomposition. Biological Cybernetics, 59, 291–294. 502
Bourlard, H. and Wellekens, C. (1989). Speech pattern discrimination and multi-layered perceptrons. Computer Speech and Language, 3, 1–19. 459
Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press, New York, NY, USA. 93
Brady, M. L., Raghavan, R., and Slawny, J. (1989). Back-propagation fails to separate where perceptrons succeed. IEEE Transactions on Circuits and Systems, 36, 665–674. 284
Brakel, P., Stroobandt, D., and Schrauwen, B. (2013). Training energy-based models for time-series imputation. Journal of Machine Learning Research, 14, 2771–2797. 674, 698
Brand, M. (2003). Charting a manifold. In NIPS'2002, pages 961–968. MIT Press. 164, 518