Chapter 7. Regularization for Deep Learning

Convolutional neural networks. By far the most popular and extensive use of parameter sharing occurs in convolutional neural networks (CNNs) applied to computer vision. Natural images have many statistical properties that are invariant to translation. For example, a photo of a cat remains a photo of a cat if it is translated one pixel to the right. CNNs take this property into account by sharing parameters across multiple image locations. The same feature (a hidden unit with the same weights) is computed over different locations in the input. This means that we can find a cat with the same cat detector whether the cat appears at column i or column i + 1 in the image. Parameter sharing has allowed CNNs to dramatically lower the number of unique model parameters and to significantly increase network sizes without requiring a corresponding increase in training data. It remains one of the best examples of how to effectively incorporate domain knowledge into the network architecture. CNNs will be discussed in more detail in chapter 9.

7.10 Sparse Representations

Weight decay acts by placing a penalty directly on the model parameters. Another strategy is to place a penalty on the activations of the units in a neural network, encouraging their activations to be sparse.
Source: Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville, pp. 269-280.
This indirectly imposes a complicated penalty on the model parameters. We have already discussed (in section 7.1.2) how L1 penalization induces a sparse parametrization, meaning that many of the parameters become zero (or close to zero). Representational sparsity, on the other hand, describes a representation where many of the elements of the representation are zero (or close to zero). A simplified view of this distinction can be illustrated in the context of linear regression:

    [ 18]   [ 4  0  0 -2  0  0 ] [ 2]
    [  5]   [ 0  0 -1  0  3  0 ] [ 3]
    [ 15] = [ 0  5  0  0  0  0 ] [-2]        (7.46)
    [ -9]   [ 1  0  0 -1  0 -4 ] [-5]
    [ -3]   [ 1  0  0  0 -5  0 ] [ 1]
                                 [ 4]
    y ∈ R^m     A ∈ R^{m×n}      x ∈ R^n
    [-14]   [ 3 -1  2 -5  4  1 ] [ 0]
    [  1]   [ 4  2 -3 -1  1  3 ] [ 2]
    [ 19] = [-1  5  4  2 -3 -2 ] [ 0]        (7.47)
    [  2]   [ 3  1  2 -3  0 -3 ] [ 0]
    [ 23]   [-5  4 -2  2 -5 -1 ] [-3]
                                 [ 0]
    y ∈ R^m     B ∈ R^{m×n}      h ∈ R^n

In the first expression, we have an example of a sparsely parametrized linear regression model. In the second, we have linear regression with a sparse representation h of the data x. That is, h is a function of x that, in some sense, represents the information present in x, but does so with a sparse vector.

Representational regularization is accomplished by the same sorts of mechanisms that we have used in parameter regularization. Norm penalty regularization of representations is performed by adding to the loss function J a norm penalty on the representation, denoted Ω(h). As before, we denote the regularized loss function by J̃:

    J̃(θ; X, y) = J(θ; X, y) + αΩ(h)        (7.48)
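As a sanity check, the two linear systems in equations 7.46 and 7.47 (as reconstructed here from the text) can be multiplied out with numpy; a short sketch:

```python
import numpy as np

# Equation 7.46: sparsely parametrized model (the matrix A is mostly zeros)
A = np.array([[4, 0,  0, -2,  0,  0],
              [0, 0, -1,  0,  3,  0],
              [0, 5,  0,  0,  0,  0],
              [1, 0,  0, -1,  0, -4],
              [1, 0,  0,  0, -5,  0]])
x = np.array([2, 3, -2, -5, 1, 4])
print(A @ x)   # [18  5 15 -9 -3]

# Equation 7.47: sparse representation (the vector h is mostly zeros)
B = np.array([[ 3, -1,  2, -5,  4,  1],
              [ 4,  2, -3, -1,  1,  3],
              [-1,  5,  4,  2, -3, -2],
              [ 3,  1,  2, -3,  0, -3],
              [-5,  4, -2,  2, -5, -1]])
h = np.array([0, 2, 0, 0, -3, 0])
print(B @ h)   # [-14   1  19   2  23]
```

In the first system the sparsity lives in the parameters; in the second it lives in the representation of the data.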
where α ∈ [0, ∞) weights the relative contribution of the norm penalty term, with larger values of α corresponding to more regularization.

Just as an L1 penalty on the parameters induces parameter sparsity, an L1 penalty on the elements of the representation induces representational sparsity: Ω(h) = ||h||_1 = Σ_i |h_i|. Of course, the L1 penalty is only one choice of penalty that can result in a sparse representation. Others include the penalty derived from a Student-t prior on the representation (Olshausen and Field, 1996; Bergstra, 2011) and KL divergence penalties (Larochelle and Bengio, 2008) that are especially useful for representations with elements constrained to lie on the unit interval.
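The L1 representational penalty in equation 7.48 is simple to compute; a minimal sketch, with made-up activations and a placeholder value standing in for the data loss J(θ; X, y):

```python
import numpy as np

# hypothetical hidden activations h for one example (mostly zeros)
h = np.array([0.0, 2.5, 0.0, -1.2, 0.0, 0.3])

alpha = 0.1                      # weight of the penalty term
omega = np.abs(h).sum()          # Omega(h) = ||h||_1
data_loss = 0.8                  # placeholder for J(theta; X, y)
regularized_loss = data_loss + alpha * omega   # equation 7.48
print(regularized_loss)          # ≈ 1.2
```

During training, the subgradient of α||h||_1 pushes small activations toward exactly zero, which is what produces the sparsity.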
Lee et al. (2008) and Goodfellow et al. (2009) both provide examples of strategies based on regularizing the average activation across several examples, (1/m) Σ_i h^(i), to be near some target value, such as a vector with .01 for each entry.

Other approaches obtain representational sparsity with a hard constraint on the activation values. For example, orthogonal matching pursuit (Pati et al., 1993) encodes an input x with the representation h that solves the constrained optimization problem

    arg min      ||x - Wh||^2,        (7.49)
    h, ||h||_0 < k

where ||h||_0 is the number of non-zero entries of h. This problem can be solved efficiently when W is constrained to be orthogonal.
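The greedy loop at the heart of orthogonal matching pursuit can be sketched as follows (a minimal illustration, assuming W has unit-norm columns; the function name is ours):

```python
import numpy as np

def omp_k(x, W, k):
    """Greedy orthogonal matching pursuit: encode x with at most k
    non-zero entries of h, approximately minimizing ||x - W h||^2."""
    n_components = W.shape[1]
    h = np.zeros(n_components)
    support = []
    residual = x.copy()
    for _ in range(k):
        # pick the dictionary column most correlated with the residual
        j = int(np.argmax(np.abs(W.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit of the coefficients on the current support
        coef, *_ = np.linalg.lstsq(W[:, support], x, rcond=None)
        h[:] = 0.0
        h[support] = coef
        residual = x - W @ h
    return h
```

When W is orthogonal, the correlations W^T x recover the true coefficients directly, which is why the orthogonality constraint makes the problem efficiently solvable.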
This method is often called OMP-k, with the value of k specified to indicate the number of non-zero features allowed. Coates and Ng (2011) demonstrated that OMP-1 can be a very effective feature extractor for deep architectures.

Essentially any model that has hidden units can be made sparse. Throughout this book, we will see many examples of sparsity regularization used in a variety of contexts.

7.11 Bagging and Other Ensemble Methods

Bagging (short for bootstrap aggregating) is a technique for reducing generalization error by combining several models (Breiman, 1994). The idea is to train several different models separately, then have all of the models vote on the output for test examples. This is an example of a general strategy in machine learning called model averaging. Techniques employing this strategy are known as ensemble methods.

The reason that model averaging works is that different models will usually not make all the same errors on the test set.
Consider for example a set of k regression models. Suppose that each model makes an error ε_i on each example, with the errors drawn from a zero-mean multivariate normal distribution with variances E[ε_i^2] = v and covariances E[ε_i ε_j] = c. Then the error made by the average prediction of all the ensemble models is (1/k) Σ_i ε_i. The expected squared error of the ensemble predictor is

    E[ ((1/k) Σ_i ε_i)^2 ] = (1/k^2) E[ Σ_i ( ε_i^2 + Σ_{j≠i} ε_i ε_j ) ]        (7.50)
                           = (1/k) v + ((k - 1)/k) c.                            (7.51)

In the case where the errors are perfectly correlated and c = v, the mean squared error reduces to v, so the model averaging does not help at all. In the case where the errors are perfectly uncorrelated and c = 0, the expected squared error of the ensemble is only (1/k) v. This means that the expected squared error of the ensemble decreases linearly with the ensemble size. In other words, on average, the ensemble will perform at least as well as any of its members, and if the members make independent errors, the ensemble will perform significantly better than its members.
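Equations 7.50-7.51 are easy to check by simulation; a sketch with arbitrary values of k, v and c:

```python
import numpy as np

rng = np.random.default_rng(0)
k, v, c = 10, 2.0, 0.5

# covariance matrix with variance v on the diagonal and covariance c elsewhere
cov = np.full((k, k), c) + np.eye(k) * (v - c)
errors = rng.multivariate_normal(np.zeros(k), cov, size=200_000)

ensemble_err = errors.mean(axis=1)          # (1/k) sum_i eps_i per example
empirical = (ensemble_err ** 2).mean()
predicted = v / k + (k - 1) / k * c         # equation 7.51
print(empirical, predicted)                 # both ≈ 0.65
```

Setting c = v makes both values collapse to v, and setting c = 0 makes them collapse to v/k, matching the two limiting cases discussed above.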
Different ensemble methods construct the ensemble of models in different ways. For example, each member of the ensemble could be formed by training a completely different kind of model using a different algorithm or objective function.
Figure 7.5: A cartoon depiction of how bagging works. [Figure panels: original dataset; first and second resampled datasets; first and second ensemble members.] Suppose we train an 8 detector on the dataset depicted above, containing an 8, a 6 and a 9. Suppose we make two different resampled datasets. The bagging training procedure is to construct each of these datasets by sampling with replacement. The first dataset omits the 9 and repeats the 8. On this dataset, the detector learns that a loop on top of the digit corresponds to an 8. On the second dataset, we repeat the 9 and omit the 6. In this case, the detector learns that a loop on the bottom of the digit corresponds to an 8. Each of these individual classification rules is brittle, but if we average their output then the detector is robust, achieving maximal confidence only when both loops of the 8 are present.

Bagging is a method that allows the same kind of model, training algorithm and objective function to be reused several times.
Specifically, bagging involves constructing k different datasets. Each dataset has the same number of examples as the original dataset, but each dataset is constructed by sampling with replacement from the original dataset. This means that, with high probability, each dataset is missing some of the examples from the original dataset and also contains several duplicate examples (on average around 2/3 of the examples from the original dataset are found in the resulting training set, if it has the same size as the original). Model i is then trained on dataset i. The differences between which examples are included in each dataset result in differences between the trained models. See figure 7.5 for an example.

Neural networks reach a wide enough variety of solution points that they can often benefit from model averaging even if all of the models are trained on the same dataset.
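The "around 2/3" figure is the classic bootstrap quantity 1 - 1/e ≈ 0.632, the expected fraction of distinct original examples in a same-size resample; a quick simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 10_000                          # dataset size
idx = rng.integers(0, m, size=m)    # one bootstrap dataset: sample with replacement
unique_frac = len(np.unique(idx)) / m
print(unique_frac)                  # ≈ 1 - 1/e ≈ 0.632
```

Each trained ensemble member would then see `data[idx]` for its own independently drawn `idx`.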
Differences in random initialization, random selection of minibatches, differences in hyperparameters, or different outcomes of non-deterministic implementations of neural networks are often enough to cause different members of the ensemble to make partially independent errors.
Model averaging is an extremely powerful and reliable method for reducing generalization error. Its use is usually discouraged when benchmarking algorithms for scientific papers, because any machine learning algorithm can benefit substantially from model averaging at the price of increased computation and memory. For this reason, benchmark comparisons are usually made using a single model.

Machine learning contests are usually won by methods using model averaging over dozens of models. A recent prominent example is the Netflix Grand Prize (Koren, 2009).

Not all techniques for constructing ensembles are designed to make the ensemble more regularized than the individual models. For example, a technique called boosting (Freund and Schapire, 1996b,a) constructs an ensemble with higher capacity than the individual models. Boosting has been applied to build ensembles of neural networks (Schwenk and Bengio, 1998) by incrementally adding neural networks to the ensemble. Boosting has also been applied interpreting an individual neural network as an ensemble (Bengio et al., 2006a), incrementally adding hidden units to the neural network.
7.12 Dropout

Dropout (Srivastava et al., 2014) provides a computationally inexpensive but powerful method of regularizing a broad family of models. To a first approximation, dropout can be thought of as a method of making bagging practical for ensembles of very many large neural networks. Bagging involves training multiple models, and evaluating multiple models on each test example. This seems impractical when each model is a large neural network, since training and evaluating such networks is costly in terms of runtime and memory. It is common to use ensembles of five to ten neural networks (Szegedy et al. (2014a) used six to win the ILSVRC), but more than this rapidly becomes unwieldy. Dropout provides an inexpensive approximation to training and evaluating a bagged ensemble of exponentially many neural networks.
Specifically, dropout trains the ensemble consisting of all sub-networks that can be formed by removing non-output units from an underlying base network, as illustrated in figure 7.6. In most modern neural networks, based on a series of affine transformations and nonlinearities, we can effectively remove a unit from a network by multiplying its output value by zero.
This procedure requires some slight modification for models such as radial basis function networks, which take the difference between the unit's state and some reference value. Here, we present the dropout algorithm in terms of multiplication by zero for simplicity, but it can be trivially modified to work with other operations that remove a unit from the network.

Recall that to learn with bagging, we define k different models, construct k different datasets by sampling from the training set with replacement, and then train model i on dataset i. Dropout aims to approximate this process, but with an exponentially large number of neural networks. Specifically, to train with dropout, we use a minibatch-based learning algorithm that makes small steps, such as stochastic gradient descent. Each time we load an example into a minibatch, we randomly sample a different binary mask to apply to all of the input and hidden units in the network. The mask for each unit is sampled independently from all of the others. The probability of sampling a mask value of one (causing a unit to be included) is a hyperparameter fixed before training begins. It is not a function of the current value of the model parameters or the input example.
Typically, an input unit is included with probability 0.8 and a hidden unit is included with probability 0.5. We then run forward propagation, back-propagation, and the learning update as usual. Figure 7.7 illustrates how to run forward propagation with dropout.

More formally, suppose that a mask vector µ specifies which units to include, and J(θ, µ) defines the cost of the model defined by parameters θ and mask µ. Then dropout training consists in minimizing E_µ J(θ, µ). The expectation contains exponentially many terms but we can obtain an unbiased estimate of its gradient by sampling values of µ.

Dropout training is not quite the same as bagging training. In the case of bagging, the models are all independent. In the case of dropout, the models share parameters, with each model inheriting a different subset of parameters from the parent neural network.
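The training-time forward pass just described, sampling a fresh binary mask per call and multiplying unit outputs by it, can be sketched for a small feedforward net (the layer sizes and the ReLU nonlinearity are illustrative choices, not prescribed by the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2, train=True):
    """Forward propagation with dropout masks sampled per call.
    Inclusion probabilities: 0.8 for input units, 0.5 for hidden units."""
    if train:
        x = x * rng.binomial(1, 0.8, size=x.shape)   # input-unit mask
    h = np.maximum(0.0, W1 @ x + b1)                 # ReLU hidden layer
    if train:
        h = h * rng.binomial(1, 0.5, size=h.shape)   # hidden-unit mask
    return W2 @ h + b2
```

Each training call corresponds to one randomly selected sub-network; with `train=False` the full base network runs (weight rescaling for inference is covered later in the section).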
This parameter sharing makes it possible to represent an exponential number of models with a tractable amount of memory. In the case of bagging, each model is trained to convergence on its respective training set. In the case of dropout, typically most models are not explicitly trained at all; usually, the model is large enough that it would be infeasible to sample all possible sub-networks within the lifetime of the universe. Instead, a tiny fraction of the possible sub-networks are each trained for a single step, and the parameter sharing causes the remaining sub-networks to arrive at good settings of the parameters. These are the only differences. Beyond these, dropout follows the bagging algorithm. For example, the training set encountered by each sub-network is indeed a subset of the original training set sampled with replacement.
Figure 7.6: Dropout trains an ensemble consisting of all sub-networks that can be constructed by removing non-output units from an underlying base network. [Diagram: a base network with units x1, x2, h1, h2, y, alongside its ensemble of subnetworks.] Here, we begin with a base network with two visible units and two hidden units. There are sixteen possible subsets of these four units.
We show all sixteen subnetworks that may be formed by dropping out different subsets of units from the original network. In this small example, a large proportion of the resulting networks have no input units or no path connecting the input to the output. This problem becomes insignificant for networks with wider layers, where the probability of dropping all possible paths from inputs to outputs becomes smaller.
Figure 7.7: An example of forward propagation through a feedforward network using dropout. (Top) In this example, we use a feedforward network with two input units, one hidden layer with two hidden units, and one output unit. (Bottom) To perform forward propagation with dropout, we randomly sample a vector µ with one entry for each input or hidden unit in the network. The entries of µ are binary and are sampled independently from each other. The probability of each entry being 1 is a hyperparameter, usually 0.5 for the hidden layers and 0.8 for the input. Each unit in the network is multiplied by the corresponding mask, and then forward propagation continues through the rest of the network as usual. This is equivalent to randomly selecting one of the sub-networks from figure 7.6 and running forward propagation through it.
To make a prediction, a bagged ensemble must accumulate votes from all of its members. We refer to this process as inference in this context. So far, our description of bagging and dropout has not required that the model be explicitly probabilistic. Now, we assume that the model's role is to output a probability distribution. In the case of bagging, each model i produces a probability distribution p^(i)(y | x). The prediction of the ensemble is given by the arithmetic mean of all of these distributions,

    (1/k) Σ_{i=1}^{k} p^(i)(y | x).        (7.52)

In the case of dropout, each sub-model defined by mask vector µ defines a probability distribution p(y | x, µ). The arithmetic mean over all masks is given by

    Σ_µ p(µ) p(y | x, µ)        (7.53)

where p(µ) is the probability distribution that was used to sample µ at training time.
Because this sum includes an exponential number of terms, it is intractable to evaluate except in cases where the structure of the model permits some form of simplification. So far, deep neural nets are not known to permit any tractable simplification. Instead, we can approximate the inference with sampling, by averaging together the output from many masks. Even 10-20 masks are often sufficient to obtain good performance.

However, there is an even better approach, that allows us to obtain a good approximation to the predictions of the entire ensemble, at the cost of only one forward propagation. To do so, we change to using the geometric mean rather than the arithmetic mean of the ensemble members' predicted distributions. Warde-Farley et al. (2014) present arguments and empirical evidence that the geometric mean performs comparably to the arithmetic mean in this context.

The geometric mean of multiple probability distributions is not guaranteed to be a probability distribution. To guarantee that the result is a probability distribution, we impose the requirement that none of the sub-models assigns probability 0 to any event, and we renormalize the resulting distribution.
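The sampling approximation described above, a Monte Carlo estimate of equation 7.53 using a handful of masks, can be sketched for an illustrative one-layer softmax model with dropout on the input (the model and its parameters are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# an illustrative single-layer softmax model, dropout applied to the input
W = rng.standard_normal((3, 4))
b = np.zeros(3)
v = rng.standard_normal(4)

n_masks = 20
probs = np.zeros(3)
for _ in range(n_masks):
    d = rng.binomial(1, 0.5, size=v.shape)   # sample one mask
    probs += softmax(W @ (d * v) + b)        # one sub-model's prediction
probs /= n_masks                             # arithmetic mean over masks
print(probs.sum())                           # ≈ 1.0, a valid distribution
```

Each loop iteration is one forward pass through a randomly selected sub-network; averaging their predicted distributions approximates the full ensemble.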
The unnormalized probability distribution defined directly by the geometric mean is given by

    p̃_ensemble(y | x) = ( Π_µ p(y | x, µ) )^(1/2^d)        (7.54)

where d is the number of units that may be dropped. Here we use a uniform distribution over µ to simplify the presentation, but non-uniform distributions are also possible.
To make predictions we must re-normalize the ensemble:

    p_ensemble(y | x) = p̃_ensemble(y | x) / Σ_{y'} p̃_ensemble(y' | x).        (7.55)

A key insight (Hinton et al., 2012c) involved in dropout is that we can approximate p_ensemble by evaluating p(y | x) in one model: the model with all units, but with the weights going out of unit i multiplied by the probability of including unit i. The motivation for this modification is to capture the right expected value of the output from that unit. We call this approach the weight scaling inference rule. There is not yet any theoretical argument for the accuracy of this approximate inference rule in deep nonlinear networks, but empirically it performs very well.

Because we usually use an inclusion probability of 1/2, the weight scaling rule usually amounts to dividing the weights by 2 at the end of training, and then using the model as usual. Another way to achieve the same result is to multiply the states of the units by 2 during training.
Either way, the goal is to make sure that the expected total input to a unit at test time is roughly the same as the expected total input to that unit at train time, even though half the units at train time are missing on average.

For many classes of models that do not have nonlinear hidden units, the weight scaling inference rule is exact. For a simple example, consider a softmax regression classifier with n input variables represented by the vector v:

    P(y = y | v) = softmax(W^T v + b)_y.        (7.56)

We can index into the family of sub-models by element-wise multiplication of the input with a binary vector d:

    P(y = y | v; d) = softmax(W^T (d ⊙ v) + b)_y.        (7.57)

The ensemble predictor is defined by re-normalizing the geometric mean over all ensemble members' predictions:

    P_ensemble(y = y | v) = P̃_ensemble(y = y | v) / Σ_{y'} P̃_ensemble(y = y' | v)        (7.58)
where

    P̃_ensemble(y = y | v) = ( Π_{d ∈ {0,1}^n} P(y = y | v; d) )^(1/2^n).        (7.59)
To see that the weight scaling rule is exact, we can simplify P̃_ensemble:

    P̃_ensemble(y = y | v)
        = ( Π_{d ∈ {0,1}^n} P(y = y | v; d) )^(1/2^n)                                  (7.60)
        = ( Π_{d ∈ {0,1}^n} softmax(W^T (d ⊙ v) + b)_y )^(1/2^n)                       (7.61)
        = ( Π_{d ∈ {0,1}^n} exp(W_{y,:}^T (d ⊙ v) + b_y)
              / Σ_{y'} exp(W_{y',:}^T (d ⊙ v) + b_{y'}) )^(1/2^n)                      (7.62)
        = ( Π_{d ∈ {0,1}^n} exp(W_{y,:}^T (d ⊙ v) + b_y) )^(1/2^n)
              / ( Π_{d ∈ {0,1}^n} Σ_{y'} exp(W_{y',:}^T (d ⊙ v) + b_{y'}) )^(1/2^n)    (7.63)

Because P̃_ensemble will be normalized, we can safely ignore multiplication by factors that are constant with respect to y:

    P̃_ensemble(y = y | v)
        ∝ ( Π_{d ∈ {0,1}^n} exp(W_{y,:}^T (d ⊙ v) + b_y) )^(1/2^n)                     (7.64)
        = exp( (1/2^n) Σ_{d ∈ {0,1}^n} ( W_{y,:}^T (d ⊙ v) + b_y ) )                   (7.65)
        = exp( (1/2) W_{y,:}^T v + b_y ).        (7.66)

Substituting this back into equation 7.58, we obtain a softmax classifier with weights (1/2)W.

The weight scaling rule is also exact in other settings, including regression networks with conditionally normal outputs, and deep networks that have hidden layers without nonlinearities. However, the weight scaling rule is only an approximation for deep models that have nonlinearities. Though the approximation has not been theoretically characterized, it often works well, empirically. Goodfellow et al. (2013a) found experimentally that the weight scaling approximation can work better (in terms of classification accuracy) than Monte Carlo approximations to the ensemble predictor. This held true even when the Monte Carlo approximation was allowed to sample up to 1,000 sub-networks.
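The exactness argument in equations 7.60-7.66 can be verified numerically for small n by enumerating all 2^n masks; a sketch with randomly chosen parameters:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

n, n_classes = 4, 3
W = rng.standard_normal((n_classes, n))
b = rng.standard_normal(n_classes)
v = rng.standard_normal(n)

# geometric mean over all 2^n sub-model predictions, then renormalize (eq. 7.58)
log_p = np.zeros(n_classes)
for d in product([0, 1], repeat=n):
    log_p += np.log(softmax(W @ (np.array(d) * v) + b))
p_ensemble = np.exp(log_p / 2 ** n)
p_ensemble /= p_ensemble.sum()

# weight scaling rule: a single forward pass with the weights halved
p_scaled = softmax(W @ (v / 2) + b)
print(np.allclose(p_ensemble, p_scaled))   # True
```

The per-mask softmax denominators differ across masks but not across classes, which is exactly why they can be ignored in equation 7.64 and absorbed by the final renormalization.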
Gal and Ghahramani (2015) found that some models obtain better classification accuracy using twenty samples and
the Monte Carlo approximation. It appears that the optimal choice of inference approximation is problem-dependent. Srivastava et al. (2014) showed that dropout is more effective than other standard computationally inexpensive regularizers, such as weight decay, filter norm constraints and sparse activity regularization. Dropout may also be combined with other forms of regularization to yield a further improvement. One advantage of dropout is that it is very computationally cheap. Using dropout during training requires only O(n) computation per example per update, to generate n random binary numbers and multiply them by the state. Depending on the implementation, it may also require O(n) memory to store these binary numbers until the back-propagation stage. Running inference in the trained model has the same cost per-example as if dropout were not used, though we must pay the cost of dividing the weights by 2 once before beginning to run inference on examples. Another significant advantage of dropout is that it does not significantly limit the type of model or training procedure that can be used. It works well with nearly any model that uses a distributed representation and can be trained with stochastic gradient descent.
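A minimal sketch of these costs, assuming the common keep probability of 1/2 and scaling activations rather than weights at inference (equivalent for the forward pass):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(h, p_keep=0.5, train=True):
    """Apply dropout to activations h.

    Training: O(n) extra work to sample n binary numbers and multiply
    them into the state; the mask is kept for the backprop stage.
    Inference: no mask is sampled; the activations (equivalently, the
    outgoing weights) are scaled by p_keep once, as in the weight
    scaling rule.
    """
    if train:
        mask = rng.random(h.shape) < p_keep   # n random binary numbers
        return h * mask, mask
    return h * p_keep, None                   # same cost as a normal pass

h = rng.normal(size=8)
h_train, mask = dropout_forward(h, train=True)
h_test, _ = dropout_forward(h, train=False)
```

Each training-mode activation is either kept unchanged or zeroed; the test-mode pass is deterministic.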
This includes feedforward neural networks, probabilistic models such as restricted Boltzmann machines (Srivastava et al., 2014), and recurrent neural networks (Bayer and Osendorfer, 2014; Pascanu et al., 2014a). Many other regularization strategies of comparable power impose more severe restrictions on the architecture of the model. Though the cost per-step of applying dropout to a specific model is negligible, the cost of using dropout in a complete system can be significant. Because dropout is a regularization technique, it reduces the effective capacity of a model. To offset this effect, we must increase the size of the model. Typically the optimal validation set error is much lower when using dropout, but this comes at the cost of a much larger model and many more iterations of the training algorithm.
For very large datasets, regularization confers little reduction in generalization error. In these cases, the computational cost of using dropout and larger models may outweigh the benefit of regularization. When extremely few labeled training examples are available, dropout is less effective. Bayesian neural networks (Neal, 1996) outperform dropout on the alternative splicing dataset (Xiong et al., 2011), where fewer than 5,000 examples are available (Srivastava et al., 2014). When additional unlabeled data is available, unsupervised feature learning can gain an advantage over dropout. Wager et al. (2013) showed that, when applied to linear regression, dropout is equivalent to L2 weight decay, with a different weight decay coefficient for
each input feature. The magnitude of each feature's weight decay coefficient is determined by its variance. Similar results hold for other linear models. For deep models, dropout is not equivalent to weight decay. The stochasticity used while training with dropout is not necessary for the approach's success. It is just a means of approximating the sum over all sub-models. Wang and Manning (2013) derived analytical approximations to this marginalization. Their approximation, known as fast dropout, resulted in faster convergence time due to the reduced stochasticity in the computation of the gradient. This method can also be applied at test time, as a more principled (but also more computationally expensive) approximation to the average over all sub-networks than the weight scaling approximation. Fast dropout has been used to nearly match the performance of standard dropout on small neural network problems, but has not yet yielded a significant improvement or been applied to a large problem.
Just as stochasticity is not necessary to achieve the regularizing effect of dropout, it is also not sufficient. To demonstrate this, Warde-Farley et al. (2014) designed control experiments using a method called dropout boosting that they designed to use exactly the same mask noise as traditional dropout but lack its regularizing effect. Dropout boosting trains the entire ensemble to jointly maximize the log-likelihood on the training set. In the same sense that traditional dropout is analogous to bagging, this approach is analogous to boosting. As intended, experiments with dropout boosting show almost no regularization effect compared to training the entire network as a single model. This demonstrates that the interpretation of dropout as bagging has value beyond the interpretation of dropout as robustness to noise. The regularization effect of the bagged ensemble is only achieved when the stochastically sampled ensemble members are trained to perform well independently of each other. Dropout has inspired other stochastic approaches to training exponentially large ensembles of models that share weights. DropConnect is a special case of dropout where each product between a single scalar weight and a single hidden unit state is considered a unit that can be dropped (Wan et al., 2013).
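The contrast with DropConnect can be sketched as follows; the layer sizes are illustrative, and this toy forward pass omits biases and nonlinearities:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropconnect_forward(W, v, p_keep=0.5):
    """DropConnect: drop individual weight-input products, i.e. one
    mask entry per scalar weight (a sketch of Wan et al., 2013)."""
    M = rng.random(W.shape) < p_keep
    return (W * M) @ v

def dropout_forward(W, v, p_keep=0.5):
    """Ordinary dropout for comparison: one mask entry per input unit,
    which zeroes an entire column of products at once."""
    m = rng.random(v.shape) < p_keep
    return W @ (v * m)

W = rng.normal(size=(3, 5))
v = rng.normal(size=5)
h_dc = dropconnect_forward(W, v)
h_do = dropout_forward(W, v)
```

Dropout's mask has n entries; DropConnect's has one per weight, so it samples from a much larger family of sub-networks.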
Stochastic pooling is a form of randomized pooling (see section 9.3) for building ensembles of convolutional networks with each convolutional network attending to different spatial locations of each feature map. So far, dropout remains the most widely used implicit ensemble method. One of the key insights of dropout is that training a network with stochastic behavior and making predictions by averaging over multiple stochastic decisions implements a form of bagging with parameter sharing. Earlier, we described
dropout as bagging an ensemble of models formed by including or excluding units. However, there is no need for this model averaging strategy to be based on inclusion and exclusion. In principle, any kind of random modification is admissible. In practice, we must choose modification families that neural networks are able to learn to resist. Ideally, we should also use model families that allow a fast approximate inference rule. We can think of any form of modification parametrized by a vector \mu as training an ensemble consisting of p(y \mid x, \mu) for all possible values of \mu. There is no requirement that \mu have a finite number of values. For example, \mu can be real-valued. Srivastava et al. (2014) showed that multiplying the weights by \mu \sim \mathcal{N}(1, I) can outperform dropout based on binary masks. Because E[\mu] = 1, the standard network automatically implements approximate inference in the ensemble, without needing any weight scaling.
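A sketch of this multiplicative Gaussian noise, with the noise scale exposed as a hypothetical sigma parameter (the experiment cited used unit variance):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_dropout(h, sigma=0.5):
    """Multiply activations by mu ~ N(1, sigma^2 I). Because E[mu] = 1,
    the unmodified network already performs the approximate ensemble
    inference, so no weight scaling step is needed at test time."""
    mu = rng.normal(loc=1.0, scale=sigma, size=h.shape)
    return h * mu

h = np.ones(100_000)
noisy = gaussian_dropout(h)
print(noisy.mean())   # close to 1: the noise is unbiased
```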
So far we have described dropout purely as a means of performing efficient, approximate bagging. However, there is another view of dropout that goes further than this. Dropout trains not just a bagged ensemble of models, but an ensemble of models that share hidden units. This means each hidden unit must be able to perform well regardless of which other hidden units are in the model. Hidden units must be prepared to be swapped and interchanged between models. Hinton et al. (2012c) were inspired by an idea from biology: sexual reproduction, which involves swapping genes between two different organisms, creates evolutionary pressure for genes to become not just good, but to become readily swapped between different organisms. Such genes and such features are very robust to changes in their environment because they are not able to incorrectly adapt to unusual features of any one organism or model. Dropout thus regularizes each hidden unit to be not merely a good feature but a feature that is good in many contexts.
Warde-Farley et al. (2014) compared dropout training to training of large ensembles and concluded that dropout offers additional improvements to generalization error beyond those obtained by ensembles of independent models. It is important to understand that a large portion of the power of dropout arises from the fact that the masking noise is applied to the hidden units. This can be seen as a form of highly intelligent, adaptive destruction of the information content of the input rather than destruction of the raw values of the input. For example, if the model learns a hidden unit h_i that detects a face by finding the nose, then dropping h_i corresponds to erasing the information that there is a nose in the image. The model must learn another h_i, either one that redundantly encodes the presence of a nose, or one that detects the face by another feature, such as the mouth. Traditional noise injection techniques that add unstructured noise at the input are not able to randomly erase the information about a nose from an image of a face unless the magnitude of the noise is so great that nearly all of the information in
the image is removed. Destroying extracted features rather than original values allows the destruction process to make use of all of the knowledge about the input distribution that the model has acquired so far. Another important aspect of dropout is that the noise is multiplicative. If the noise were additive with fixed scale, then a rectified linear hidden unit h_i with added noise could simply learn to have h_i become very large in order to make the added noise insignificant by comparison. Multiplicative noise does not allow such a pathological solution to the noise robustness problem. Another deep learning algorithm, batch normalization, reparametrizes the model in a way that introduces both additive and multiplicative noise on the hidden units at training time. The primary purpose of batch normalization is to improve optimization, but the noise can have a regularizing effect, and sometimes makes dropout unnecessary. Batch normalization is described further in section 8.7.1.

7.13 Adversarial Training
In many cases, neural networks have begun to reach human performance when evaluated on an i.i.d. test set. It is natural therefore to wonder whether these models have obtained a true human-level understanding of these tasks. In order to probe the level of understanding a network has of the underlying task, we can search for examples that the model misclassifies. Szegedy et al. (2014b) found that even neural networks that perform at human level accuracy have a nearly 100% error rate on examples that are intentionally constructed by using an optimization procedure to search for an input x' near a data point x such that the model output is very different at x'. In many cases, x' can be so similar to x that a human observer cannot tell the difference between the original example and the adversarial example, but the network can make highly different predictions. See figure 7.8 for an example. Adversarial examples have many implications, for example, in computer security, that are beyond the scope of this chapter.
However, they are interesting in the context of regularization because one can reduce the error rate on the original i.i.d. test set via adversarial training: training on adversarially perturbed examples from the training set (Szegedy et al., 2014b; Goodfellow et al., 2014b). Goodfellow et al. (2014b) showed that one of the primary causes of these adversarial examples is excessive linearity. Neural networks are built out of primarily linear building blocks. In some experiments the overall function they implement proves to be highly linear as a result. These linear functions are easy
Figure 7.8: A demonstration of adversarial example generation applied to GoogLeNet (Szegedy et al., 2014a) on ImageNet. By adding an imperceptibly small vector whose elements are equal to the sign of the elements of the gradient of the cost function with respect to the input, we can change GoogLeNet's classification of the image: x is classified as "panda" with 57.7% confidence, \mathrm{sign}(\nabla_x J(\theta, x, y)) as "nematode" with 8.2% confidence, and x + \epsilon\,\mathrm{sign}(\nabla_x J(\theta, x, y)) as "gibbon" with 99.3% confidence, with \epsilon = .007. Reproduced with permission from Goodfellow et al. (2014b).

to optimize. Unfortunately, the value of a linear function can change very rapidly if it has numerous inputs. If we change each input by \epsilon, then a linear function with weights w can change by as much as \epsilon \|w\|_1, which can be a very large amount if w is high-dimensional.
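The perturbation used in figure 7.8 and the \epsilon\|w\|_1 worst case can be sketched on a toy linear function (the sizes and \epsilon = .007 are illustrative):

```python
import numpy as np

def fgsm_perturbation(grad_x, epsilon=0.007):
    """Fast gradient sign perturbation, as in figure 7.8:
    x_adv = x + epsilon * sign(grad_x J(theta, x, y))."""
    return epsilon * np.sign(grad_x)

# A linear "model" f(x) = w.x makes the sensitivity explicit: changing
# each input by epsilon moves the output by up to epsilon * ||w||_1.
rng = np.random.default_rng(0)
w = rng.normal(size=10_000)        # high-dimensional weights
x = rng.normal(size=10_000)
grad = w                           # gradient of w.x with respect to x
x_adv = x + fgsm_perturbation(grad)

change = w @ x_adv - w @ x
print(change, 0.007 * np.abs(w).sum())   # equal: the worst case is attained
```

Each coordinate of the perturbation is tiny, yet the output shift grows with the dimension, which is the excessive-linearity argument in miniature.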
Adversarial training discourages this highly sensitive locally linear behavior by encouraging the network to be locally constant in the neighborhood of the training data. This can be seen as a way of explicitly introducing a local constancy prior into supervised neural nets. Adversarial training helps to illustrate the power of using a large function family in combination with aggressive regularization. Purely linear models, like logistic regression, are not able to resist adversarial examples because they are forced to be linear. Neural networks are able to represent functions that can range from nearly linear to nearly locally constant and thus have the flexibility to capture linear trends in the training data while still learning to resist local perturbation. Adversarial examples also provide a means of accomplishing semi-supervised learning.
At a point x that is not associated with a label in the dataset, the model itself assigns some label \hat{y}. The model's label \hat{y} may not be the true label, but if the model is high quality, then \hat{y} has a high probability of providing the true label. We can seek an adversarial example x' that causes the classifier to output a label y' with y' \neq \hat{y}. Adversarial examples generated using not the true label but a label provided by a trained model are called virtual adversarial examples (Miyato et al., 2015). The classifier may then be trained to assign the same label to x and x'. This encourages the classifier to learn a function that is
robust to small changes anywhere along the manifold where the unlabeled data lies. The assumption motivating this approach is that different classes usually lie on disconnected manifolds, and a small perturbation should not be able to jump from one class manifold to another class manifold.

7.14 Tangent Distance, Tangent Prop, and Manifold Tangent Classifier

Many machine learning algorithms aim to overcome the curse of dimensionality by assuming that the data lies near a low-dimensional manifold, as described in section 5.11.3. One of the early attempts to take advantage of the manifold hypothesis is the tangent distance algorithm (Simard et al., 1993, 1998). It is a non-parametric nearest-neighbor algorithm in which the metric used is not the generic Euclidean distance but one that is derived from knowledge of the manifolds near which probability concentrates. It is assumed that we are trying to classify examples and that examples on the same manifold share the same category.
Since the classifier should be invariant to the local factors of variation that correspond to movement on the manifold, it would make sense to use as nearest-neighbor distance between points x_1 and x_2 the distance between the manifolds M_1 and M_2 to which they respectively belong. Although that may be computationally difficult (it would require solving an optimization problem to find the nearest pair of points on M_1 and M_2), a cheap alternative that makes sense locally is to approximate M_i by its tangent plane at x_i and measure the distance between the two tangents, or between a tangent plane and a point. That can be achieved by solving a low-dimensional linear system (in the dimension of the manifolds). Of course, this algorithm requires one to specify the tangent vectors. In a related spirit, the tangent prop algorithm (Simard et al., 1992) (figure 7.9) trains a neural net classifier with an extra penalty to make each output f(x) of the neural net locally invariant to known factors of variation. These factors of variation correspond to movement along the manifold near which examples of the same class concentrate.
Local invariance is achieved by requiring \nabla_x f(x) to be orthogonal to the known manifold tangent vectors v^{(i)} at x, or equivalently that the directional derivative of f at x in the directions v^{(i)} be small, by adding a regularization penalty \Omega:

\Omega(f) = \sum_i \left( (\nabla_x f(x))^\top v^{(i)} \right)^2.  (7.67)
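Equation 7.67 can be sketched directly, assuming the gradient and the tangent vectors are given as arrays:

```python
import numpy as np

def tangent_prop_penalty(grad_f_x, tangents):
    """Equation 7.67: Omega(f) = sum_i ((grad_x f(x))^T v^(i))^2.

    grad_f_x: gradient of the (scalar) output at x, shape (n,).
    tangents: rows are the known manifold tangent vectors v^(i).
    """
    return np.sum((tangents @ grad_f_x) ** 2)

# Toy check: a gradient orthogonal to every tangent incurs zero penalty,
# i.e. f is locally invariant to movement along the manifold.
tangents = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])   # directions within the manifold
grad_ok = np.array([0.0, 0.0, 2.0])      # varies only in the normal direction
grad_bad = np.array([3.0, 0.0, 0.0])     # varies along the manifold

print(tangent_prop_penalty(grad_ok, tangents))    # 0.0
print(tangent_prop_penalty(grad_bad, tangents))   # 9.0
```

In a real network this penalty would be added to the task loss, scaled by a hyperparameter, and summed over outputs, as the text notes next.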
This regularizer can of course be scaled by an appropriate hyperparameter, and, for most neural networks, we would need to sum over many outputs rather than the lone output f(x) described here for simplicity. As with the tangent distance algorithm, the tangent vectors are derived a priori, usually from the formal knowledge of the effect of transformations such as translation, rotation, and scaling in images. Tangent prop has been used not just for supervised learning (Simard et al., 1992) but also in the context of reinforcement learning (Thrun, 1995). Tangent propagation is closely related to dataset augmentation. In both cases, the user of the algorithm encodes his or her prior knowledge of the task by specifying a set of transformations that should not alter the output of the network. The difference is that in the case of dataset augmentation, the network is explicitly trained to correctly classify distinct inputs that were created by applying more than an infinitesimal amount of these transformations.
Tangent propagation does not require explicitly visiting a new input point. Instead, it analytically regularizes the model to resist perturbation in the directions corresponding to the specified transformation. While this analytical approach is intellectually elegant, it has two major drawbacks. First, it only regularizes the model to resist infinitesimal perturbation. Explicit dataset augmentation confers resistance to larger perturbations. Second, the infinitesimal approach poses difficulties for models based on rectified linear units. These models can only shrink their derivatives by turning units off or shrinking their weights. They are not able to shrink their derivatives by saturating at a high value with large weights, as sigmoid or tanh units can. Dataset augmentation works well with rectified linear units because different subsets of rectified units can activate for different transformed versions of each original input.
Tangent propagation is also related to double backprop (Drucker and LeCun, 1992) and adversarial training (Szegedy et al., 2014b; Goodfellow et al., 2014b). Double backprop regularizes the Jacobian to be small, while adversarial training finds inputs near the original inputs and trains the model to produce the same output on these as on the original inputs. Tangent propagation and dataset augmentation using manually specified transformations both require that the model should be invariant to certain specified directions of change in the input. Double backprop and adversarial training both require that the model should be invariant to all directions of change in the input so long as the change is small. Just as dataset augmentation is the non-infinitesimal version of tangent propagation, adversarial training is the non-infinitesimal version of double backprop. The manifold tangent classifier (Rifai et al., 2011c) eliminates the need to know the tangent vectors a priori. As we will see in chapter 14, autoencoders can
Figure 7.9: Illustration of the main idea of the tangent prop algorithm (Simard et al., 1992) and manifold tangent classifier (Rifai et al., 2011c), which both regularize the classifier output function f(x). Each curve represents the manifold for a different class, illustrated here as a one-dimensional manifold embedded in a two-dimensional space. On one curve, we have chosen a single point and drawn a vector that is tangent to the class manifold (parallel to and touching the manifold) and a vector that is normal to the class manifold (orthogonal to the manifold). In multiple dimensions there may be many tangent directions and many normal directions. We expect the classification function to change rapidly as it moves in the direction normal to the manifold, and not to change as it moves along the class manifold. Both tangent propagation and the manifold tangent classifier regularize f(x) to not change very much as x moves along the manifold.
Tangent propagation requires the user to manually specify functions that compute the tangent directions (such as specifying that small translations of images remain in the same class manifold) while the manifold tangent classifier estimates the manifold tangent directions by training an autoencoder to fit the training data. The use of autoencoders to estimate manifolds will be described in chapter 14.

estimate the manifold tangent vectors. The manifold tangent classifier makes use of this technique to avoid needing user-specified tangent vectors. As illustrated in figure 14.10, these estimated tangent vectors go beyond the classical invariants that arise out of the geometry of images (such as translation, rotation and scaling) and include factors that must be learned because they are object-specific (such as moving body parts). The algorithm proposed with the manifold tangent classifier is therefore simple: (1) use an autoencoder to learn the manifold structure by unsupervised learning, and (2) use these tangents to regularize a neural net classifier as in tangent prop (equation 7.67).
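Step (1) can be sketched with a hypothetical linear decoder standing in for the trained autoencoder; the actual method (Rifai et al., 2011c) extracts tangents from a contractive autoencoder, so this is only an illustration of where the tangent vectors come from:

```python
import numpy as np

def decoder_tangents(decoder_jacobian):
    """Take the leading singular vectors of the decoder's Jacobian
    dg/dh at a point as estimated tangent directions of the data
    manifold near g(h). Rows of the result span the tangent space."""
    U, s, _ = np.linalg.svd(decoder_jacobian, full_matrices=False)
    return U.T

# Toy decoder g(h) = A h: the learned "manifold" is the column space
# of A, a 2-D plane inside the 3-D input space.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
tangents = decoder_tangents(A)
# Step (2) would feed these rows into the penalty of equation 7.67.
```

For this toy plane, every estimated tangent direction lies within the first two coordinates, matching the manifold.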
This chapter has described most of the general strategies used to regularize neural networks. Regularization is a central theme of machine learning and as such
will be revisited periodically by most of the remaining chapters. Another central theme of machine learning is optimization, described next.
Chapter 8. Optimization for Training Deep Models

Deep learning algorithms involve optimization in many contexts. For example, performing inference in models such as PCA involves solving an optimization problem. We often use analytical optimization to write proofs or design algorithms. Of all of the many optimization problems involved in deep learning, the most difficult is neural network training. It is quite common to invest days to months of time on hundreds of machines in order to solve even a single instance of the neural network training problem. Because this problem is so important and so expensive, a specialized set of optimization techniques have been developed for solving it. This chapter presents these optimization techniques for neural network training. If you are unfamiliar with the basic principles of gradient-based optimization, we suggest reviewing chapter 4. That chapter includes a brief overview of numerical optimization in general. This chapter focuses on one particular case of optimization: finding the parameters \theta of a neural network that significantly reduce a cost function J(\theta), which typically includes a performance measure evaluated on the entire training set as well as additional regularization terms.
neural networks difficult. we then define several practical algorithms, including both optimization algorithms themselves and strategies for initializing the parameters. more advanced algorithms adapt their learning rates during training or leverage information contained in
the second derivatives of the cost function. finally, we conclude with a review of several optimization strategies that are formed by combining simple optimization algorithms into higher-level procedures.

8.1 how learning differs from pure optimization

optimization algorithms used for training of deep models differ from traditional optimization algorithms in several ways. machine learning usually acts indirectly. in most machine learning scenarios, we care about some performance measure p, which is defined with respect to the test set and may also be intractable. we therefore optimize p only indirectly: we reduce a different cost function J(θ) in the hope that doing so will improve p. this is in contrast to pure optimization, where minimizing J is a goal in and of itself. optimization algorithms for training deep models also typically include some specialization on the specific structure of machine learning objective functions. typically, the cost function can be written as an average over the training set, such as

J(θ) = E_{(x,y)∼p̂_data} L(f(x; θ), y),    (8.1)
where L is the per-example loss function, f(x; θ) is the predicted output when the input is x, and p̂_data is the empirical distribution. in the supervised learning case, y is the target output. throughout this chapter, we develop the unregularized supervised case, where the arguments to L are f(x; θ) and y. however, it is trivial to extend this development, for example, to include θ or x as arguments, or to exclude y as an argument, in order to develop various forms of regularization or unsupervised learning. equation 8.1 defines an objective function with respect to the training set. we would usually prefer to minimize the corresponding objective function where the expectation is taken across the data generating distribution p_data rather than just over the finite training set:

J*(θ) = E_{(x,y)∼p_data} L(f(x; θ), y).    (8.2)

8.1.1 empirical risk minimization

the goal of a machine learning algorithm is to reduce the expected generalization error given by equation 8.2.
this quantity is known as the risk. we emphasize here that the expectation is taken over the true underlying distribution p_data. if we knew the true distribution p_data(x, y), risk minimization would be an optimization task
solvable by an optimization algorithm. however, when we do not know p_data(x, y) but only have a training set of samples, we have a machine learning problem. the simplest way to convert a machine learning problem back into an optimization problem is to minimize the expected loss on the training set. this means replacing the true distribution p(x, y) with the empirical distribution p̂(x, y) defined by the training set. we now minimize the empirical risk

E_{(x,y)∼p̂(x,y)} [L(f(x; θ), y)] = (1/m) Σ_{i=1}^{m} L(f(x⁽ⁱ⁾; θ), y⁽ⁱ⁾),    (8.3)

where m is the number of training examples. the training process based on minimizing this average training error is known as empirical risk minimization. in this setting, machine learning is still very similar to straightforward optimization. rather than optimizing the risk directly, we optimize the empirical risk, and hope that the risk decreases significantly as well. a variety of theoretical results establish conditions under which the true risk can be expected to decrease by various amounts.
however, empirical risk minimization is prone to overfitting. models with high capacity can simply memorize the training set. in many cases, empirical risk minimization is not really feasible. the most effective modern optimization algorithms are based on gradient descent, but many useful loss functions, such as 0-1 loss, have no useful derivatives (the derivative is either zero or undefined everywhere). these two problems mean that, in the context of deep learning, we rarely use empirical risk minimization. instead, we must use a slightly different approach, in which the quantity that we actually optimize is even more different from the quantity that we truly want to optimize.

8.1.2 surrogate loss functions and early stopping

sometimes, the loss function we actually care about (say, classification error) is not one that can be optimized efficiently. for example, exactly minimizing expected 0-1 loss is typically intractable (exponential in the input dimension), even for
a linear classifier (marcotte and savard, 1992). in such situations, one typically optimizes a surrogate loss function instead, which acts as a proxy but has advantages. for example, the negative log-likelihood of the correct class is typically used as a surrogate for the 0-1 loss. the negative log-likelihood allows the model to estimate the conditional probability of the classes, given the input, and if the model can do that well, then it can pick the classes that yield the least classification error in expectation.
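the contrast between the non-differentiable 0-1 loss and its negative log-likelihood surrogate can be sketched numerically. the sketch below is illustrative and not from the book; the scores, labels, and sigmoid model are arbitrary assumptions for a toy binary classifier.

```python
import numpy as np

# a minimal sketch (not from the book) contrasting the empirical 0-1 risk
# with its negative log-likelihood surrogate for a toy binary classifier.
# `scores` are hypothetical model outputs f(x; theta) for m = 4 examples.
scores = np.array([2.0, -1.0, 0.5, -3.0])   # positive score => predict class 1
labels = np.array([1, 0, 1, 1])             # true classes y^(i)

probs = 1.0 / (1.0 + np.exp(-scores))       # sigmoid: P(y = 1 | x)
preds = (scores > 0).astype(int)

# empirical 0-1 risk (eq. 8.3 with 0-1 loss): piecewise constant in theta,
# so its derivative is zero or undefined everywhere.
zero_one_risk = np.mean(preds != labels)

# surrogate: average negative log-likelihood of the correct class,
# which is smooth and differentiable in the scores.
nll = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

print(zero_one_risk)  # 0.25 (one of four examples misclassified)
```

small changes to `scores` leave `zero_one_risk` unchanged until a prediction flips, while `nll` responds continuously, which is what makes the surrogate usable with gradient descent.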
in some cases, a surrogate loss function actually results in being able to learn more. for example, the test set 0-1 loss often continues to decrease for a long time after the training set 0-1 loss has reached zero, when training using the log-likelihood surrogate. this is because even when the expected 0-1 loss is zero, one can improve the robustness of the classifier by further pushing the classes apart from each other, obtaining a more confident and reliable classifier, thus extracting more information from the training data than would have been possible by simply minimizing the average 0-1 loss on the training set. a very important difference between optimization in general and optimization as we use it for training algorithms is that training algorithms do not usually halt at a local minimum. instead, a machine learning algorithm usually minimizes a surrogate loss function but halts when a convergence criterion based on early stopping (section 7.8) is satisfied. typically the early stopping criterion is based on the true underlying loss function, such as 0-1 loss measured on a validation set, and is designed to cause the algorithm to halt whenever overfitting begins to occur. training
often halts while the surrogate loss function still has large derivatives, which is very different from the pure optimization setting, where an optimization algorithm is considered to have converged when the gradient becomes very small.

8.1.3 batch and minibatch algorithms

one aspect of machine learning algorithms that separates them from general optimization algorithms is that the objective function usually decomposes as a sum over the training examples. optimization algorithms for machine learning typically compute each update to the parameters based on an expected value of the cost function estimated using only a subset of the terms of the full cost function. for example, maximum likelihood estimation problems, when viewed in log space, decompose into a sum over each example:

θ_ML = arg max_θ Σ_{i=1}^{m} log p_model(x⁽ⁱ⁾, y⁽ⁱ⁾; θ).    (8.4)

maximizing this sum is equivalent to maximizing the expectation over the empirical distribution defined by the training set:
J(θ) = E_{(x,y)∼p̂_data} log p_model(x, y; θ).    (8.5)

most of the properties of the objective function J used by most of our optimization algorithms are also expectations over the training set. for example, the
most commonly used property is the gradient:

∇_θ J(θ) = E_{(x,y)∼p̂_data} ∇_θ log p_model(x, y; θ).    (8.6)

computing this expectation exactly is very expensive because it requires evaluating the model on every example in the entire dataset. in practice, we can compute these expectations by randomly sampling a small number of examples from the dataset, then taking the average over only those examples. recall that the standard error of the mean (equation 5.46) estimated from n samples is given by σ/√n, where σ is the true standard deviation of the value of the samples. the denominator of √n shows that there are less than linear returns to using more examples to estimate the gradient. compare two hypothetical estimates of the gradient, one based on 100 examples and another based on 10,000 examples. the latter requires 100 times more computation than the former, but reduces the standard error of the mean only by a factor of 10. most optimization algorithms converge much faster (in terms of total computation, not in terms of number of updates) if they are allowed to rapidly compute approximate estimates of the gradient rather than slowly computing the exact gradient.
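the σ/√n scaling can be checked numerically. the following sketch is not from the book; it uses arbitrary gaussian per-example values as a stand-in for per-example gradients and measures how the standard error of their mean shrinks with sample size.

```python
import numpy as np

# a small numerical check (not from the book) of the sigma/sqrt(n) scaling:
# averaging 100x more samples shrinks the standard error only 10x.
rng = np.random.default_rng(0)
sigma = 1.0
trials = 500

def std_error(n):
    # empirical std dev of the sample mean, measured over repeated trials
    means = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)
    return means.std()

se_100 = std_error(100)      # theory: sigma / sqrt(100)  = 0.1
se_10000 = std_error(10000)  # theory: sigma / sqrt(10000) = 0.01
print(se_100, se_10000)      # ratio is close to 10, not 100
```

the 100x extra computation buys only a 10x reduction in noise, which is the quantitative argument for small minibatches.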
another consideration motivating statistical estimation of the gradient from a small number of samples is redundancy in the training set. in the worst case, all m samples in the training set could be identical copies of each other. a sampling-based estimate of the gradient could compute the correct gradient with a single sample, using m times less computation than the naive approach. in practice, we are unlikely to truly encounter this worst-case situation, but we may find large numbers of examples that all make very similar contributions to the gradient. optimization algorithms that use the entire training set are called batch or deterministic gradient methods, because they process all of the training examples simultaneously in a large batch. this terminology can be somewhat confusing because the word “batch” is also often used to describe the minibatch used by minibatch stochastic gradient descent. typically the term “batch gradient descent” implies the use of the full training set, while the use of the term “batch” to describe a group of examples does not.
for example, it is very common to use the term “batch size” to describe the size of a minibatch. optimization algorithms that use only a single example at a time are sometimes called stochastic or sometimes online methods. the term online is usually reserved for the case where the examples are drawn from a stream of continually created examples rather than from a fixed-size training set over which several passes are made. most algorithms used for deep learning fall somewhere in between, using more
than one but less than all of the training examples. these were traditionally called minibatch or minibatch stochastic methods, and it is now common to simply call them stochastic methods. the canonical example of a stochastic method is stochastic gradient descent, presented in detail in section 8.3.1. minibatch sizes are generally driven by the following factors:

• larger batches provide a more accurate estimate of the gradient, but with less than linear returns.
• multicore architectures are usually underutilized by extremely small batches. this motivates using some absolute minimum batch size, below which there is no reduction in the time to process a minibatch.
• if all examples in the batch are to be processed in parallel (as is typically the case), then the amount of memory scales with the batch size. for many hardware setups this is the limiting factor in batch size.
• some kinds of hardware achieve better runtime with specific sizes of arrays. especially when using gpus, it is common for power-of-2 batch sizes to offer better runtime. typical power-of-2 batch sizes range from 32 to 256, with 16 sometimes being attempted for large models.
• small batches can offer a regularizing effect (wilson and martinez, 2003), perhaps due to the noise they add to the learning process. generalization error is often best for a batch size of 1. training with such a small batch size might require a small learning rate to maintain stability due to the high variance in the estimate of the gradient. the total runtime can be very high due to the need to make more steps, both because of the reduced learning rate and because it takes more steps to observe the entire training set.

different kinds of algorithms use different kinds of information from the minibatch in different ways. some algorithms are more sensitive to sampling error than others, either because they use information that is difficult to estimate accurately with few samples, or because they use information in ways that amplify sampling errors more. methods that compute updates based only on the gradient g are usually relatively robust and can handle smaller batch sizes like 100.
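the gap in sensitivity between first-order updates based on g and second-order updates involving the hessian H can be sketched on a toy problem. the sketch below is illustrative and not from the book; the quadratic, the noise level, and the eigenvalues are arbitrary assumptions.

```python
import numpy as np

# an illustrative sketch (not from the book): the same sampling noise in a
# minibatch gradient estimate g is strongly amplified when multiplied by
# the inverse of an ill-conditioned Hessian H.
rng = np.random.default_rng(0)

H = np.diag([100.0, 0.01])          # condition number 10^4
H_inv = np.linalg.inv(H)
g_true = np.array([1.0, 1.0])

# simulate minibatch gradient estimates: true gradient plus sampling noise
noise = 0.01 * rng.standard_normal((1000, 2))
g_est = g_true + noise

# average fluctuation of the first-order update g vs the update H^-1 g
g_spread = np.linalg.norm(g_est - g_true, axis=1).mean()
newton_spread = np.linalg.norm((g_est - g_true) @ H_inv.T, axis=1).mean()

print(g_spread)        # on the order of the noise itself
print(newton_spread)   # far larger: the 1/0.01 eigenvalue amplifies the noise
```

this is one way to see why second-order updates need much larger batches: the 1/λ_min factor in H⁻¹ turns small estimation errors in g into large swings in the update.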
second-order methods, which also use the hessian matrix H and compute updates such as H⁻¹g, typically require much larger batch sizes like 10,000. these large batch sizes are required to minimize fluctuations in the estimates of H⁻¹g. suppose that H is estimated perfectly but has a poor condition number. multiplication by
H or its inverse amplifies pre-existing errors, in this case, estimation errors in g. very small changes in the estimate of g can thus cause large changes in the update H⁻¹g, even if H were estimated perfectly. of course, H will be estimated only approximately, so the update H⁻¹g will contain even more error than we would predict from applying a poorly conditioned operation to the estimate of g. it is also crucial that the minibatches be selected randomly. computing an unbiased estimate of the expected gradient from a set of samples requires that those samples be independent. we also wish for two subsequent gradient estimates to be independent from each other, so two subsequent minibatches of examples should also be independent from each other. many datasets are most naturally arranged in a way where successive examples are highly correlated. for example, we might have a dataset of medical data with a long list of blood sample test results. this list might be arranged so that first we have five blood samples taken at different times from the first patient, then we have three blood samples taken from the second patient, then the blood samples from the third patient, and so on.
if we were to draw examples in order from this list, then each of our minibatches would be extremely biased, because it would represent primarily one patient out of the many patients in the dataset. in cases such as these where the order of the dataset holds some significance, it is necessary to shuffle the examples before selecting minibatches. for very large datasets, for example datasets containing billions of examples in a data center, it can be impractical to sample examples truly uniformly at random every time we want to construct a minibatch. fortunately, in practice it is usually sufficient to shuffle the order of the dataset once and then store it in shuffled fashion. this will impose a fixed set of possible minibatches of consecutive examples that all models trained thereafter will use, and each individual model will be forced to reuse this ordering every time it passes through the training data. however, this deviation from true random selection does not seem to have a significant detrimental effect.
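the shuffle-once strategy can be sketched in a few lines. the example below is illustrative and not from the book; the dataset size and batch size are arbitrary, and integers stand in for (x, y) pairs.

```python
import numpy as np

# a minimal sketch (not from the book) of the shuffle-once strategy:
# permute the dataset a single time, then every epoch reuses the same
# fixed minibatches of consecutive examples.
rng = np.random.default_rng(0)

n_examples, batch_size = 10, 2
data = np.arange(n_examples)          # stand-in for (x, y) pairs

perm = rng.permutation(n_examples)    # shuffle once...
data = data[perm]                     # ...and store in shuffled order

def minibatches(data, batch_size):
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

epoch1 = [b.tolist() for b in minibatches(data, batch_size)]
epoch2 = [b.tolist() for b in minibatches(data, batch_size)]
print(epoch1 == epoch2)   # True: every epoch reuses the same fixed minibatches
```

the one-time permutation removes any ordering in the raw data (such as grouping by patient), while avoiding the cost of sampling uniformly at random for every minibatch.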
failing to ever shuffle the examples in any way can seriously reduce the effectiveness of the algorithm. many optimization problems in machine learning decompose over examples well enough that we can compute entire separate updates over different examples in parallel. in other words, we can compute the update that minimizes J(X) for one minibatch of examples X at the same time that we compute the update for several other minibatches. such asynchronous parallel distributed approaches are discussed further in section 12.1.3. an interesting motivation for minibatch stochastic gradient descent is that it follows the gradient of the true generalization error (equation 8.2) so long as no examples are repeated. most implementations of minibatch stochastic gradient
descent shuffle the dataset once and then pass through it multiple times. on the first pass, each minibatch is used to compute an unbiased estimate of the true generalization error. on the second pass, the estimate becomes biased because it is formed by re-sampling values that have already been used, rather than obtaining new fair samples from the data generating distribution. the fact that stochastic gradient descent minimizes generalization error is easiest to see in the online learning case, where examples or minibatches are drawn from a stream of data. in other words, instead of receiving a fixed-size training set, the learner is similar to a living being who sees a new example at each instant, with every example (x, y) coming from the data generating distribution p_data(x, y). in this scenario, examples are never repeated; every experience is a fair sample from p_data. the equivalence is easiest to derive when both x and y are discrete. in this case, the generalization error (equation 8.2) can be written as a sum
J*(θ) = Σ_x Σ_y p_data(x, y) L(f(x; θ), y),    (8.7)

with the exact gradient

g = ∇_θ J*(θ) = Σ_x Σ_y p_data(x, y) ∇_θ L(f(x; θ), y).    (8.8)

we have already seen the same fact demonstrated for the log-likelihood in equation 8.5 and equation 8.6; we observe now that this holds for other functions L besides the likelihood. a similar result can be derived when x and y are continuous, under mild assumptions regarding p_data and L. hence, we can obtain an unbiased estimator of the exact gradient of the generalization error by sampling a minibatch of examples {x⁽¹⁾, ..., x⁽ᵐ⁾} with corresponding targets y⁽ⁱ⁾ from the data generating distribution p_data, and computing the gradient of the loss with respect to the parameters for that minibatch:
ĝ = (1/m) ∇_θ Σ_i L(f(x⁽ⁱ⁾; θ), y⁽ⁱ⁾).    (8.9)

updating θ in the direction of ĝ performs sgd on the generalization error. of course, this interpretation only applies when examples are not reused. nonetheless, it is usually best to make several passes through the training set, unless the training set is extremely large. when multiple such epochs are used, only the first epoch follows the unbiased gradient of the generalization error, but
of course, the additional epochs usually provide enough benefit due to decreased training error to offset the harm they cause by increasing the gap between training error and test error. with some datasets growing rapidly in size, faster than computing power, it is becoming more common for machine learning applications to use each training example only once or even to make an incomplete pass through the training set. when using an extremely large training set, overfitting is not an issue, so underfitting and computational efficiency become the predominant concerns. see also bottou and bousquet (2008) for a discussion of the effect of computational bottlenecks on generalization error, as the number of training examples grows.

8.2 challenges in neural network optimization

optimization in general is an extremely difficult task. traditionally, machine learning has avoided the difficulty of general optimization by carefully designing the objective function and constraints to ensure that the optimization problem is convex. when training neural networks, we must confront the general non-convex case. even convex optimization is not without its complications. in this section, we summarize several of the most prominent challenges involved in optimization for training deep models.
8.2.1 ill-conditioning

some challenges arise even when optimizing convex functions. of these, the most prominent is ill-conditioning of the hessian matrix H. this is a very general problem in most numerical optimization, convex or otherwise, and is described in more detail in section 4.3.1. the ill-conditioning problem is generally believed to be present in neural network training problems. ill-conditioning can manifest by causing sgd to get “stuck” in the sense that even very small steps increase the cost function. recall from equation 4.9 that a second-order taylor series expansion of the cost function predicts that a gradient descent step of −εg will add

(1/2) ε² gᵀHg − ε gᵀg    (8.10)

to the cost. ill-conditioning of the gradient becomes a problem when (1/2) ε² gᵀHg exceeds ε gᵀg. to determine whether ill-conditioning is detrimental to a neural network training task, one can monitor the squared gradient norm gᵀg and
the gᵀHg term.

figure 8.1: gradient descent often does not arrive at a critical point of any kind. in this example, the gradient norm increases throughout training of a convolutional network used for object detection. the two panels plot gradient norm and classification error rate against training time in epochs. (left) a scatterplot showing how the norms of individual gradient evaluations are distributed over time. to improve legibility, only one gradient norm is plotted per epoch. the running average of all gradient norms is plotted as a solid curve. the gradient norm clearly increases over time, rather than decreasing as we would expect if the training process converged to a critical point. (right) despite the increasing gradient, the training process is reasonably successful. the validation set classification error decreases to a low level.

in many cases, the gradient norm does not shrink significantly throughout learning, but the
gᵀHg term grows by more than an order of magnitude. the result is that learning becomes very slow despite the presence of a strong gradient, because the learning rate must be shrunk to compensate for even stronger curvature. figure 8.1 shows an example of the gradient increasing significantly during the successful training of a neural network. though ill-conditioning is present in other settings besides neural network training, some of the techniques used to combat it in other contexts are less applicable to neural networks. for example, newton’s method is an excellent tool for minimizing convex functions with poorly conditioned hessian matrices, but in the subsequent sections we will argue that newton’s method requires significant modification before it can be applied to neural networks.

8.2.2 local minima

one of the most prominent features of a convex optimization problem is that it can be reduced to the problem of finding a local minimum. any local minimum is
guaranteed to be a global minimum. Some convex functions have a flat region at the bottom rather than a single global minimum point, but any point within such a flat region is an acceptable solution. When optimizing a convex function, we know that we have reached a good solution if we find a critical point of any kind.

With non-convex functions, such as neural nets, it is possible to have many local minima. Indeed, nearly any deep model is essentially guaranteed to have an extremely large number of local minima. However, as we will see, this is not necessarily a major problem.

Neural networks and any models with multiple equivalently parametrized latent variables all have multiple local minima because of the model identifiability problem. A model is said to be identifiable if a sufficiently large training set can rule out all but one setting of the model's parameters. Models with latent variables are often not identifiable because we can obtain equivalent models by exchanging latent variables with each other. For example, we could take a neural network and modify layer 1 by swapping the incoming weight vector for unit i with the incoming weight vector for unit j, then
doing the same for the outgoing weight vectors. If we have m layers with n units each, then there are n!^m ways of arranging the hidden units. This kind of non-identifiability is known as weight space symmetry.

In addition to weight space symmetry, many kinds of neural networks have additional causes of non-identifiability. For example, in any rectified linear or maxout network, we can scale all of the incoming weights and biases of a unit by α if we also scale all of its outgoing weights by 1/α. This means that, if the cost function does not include terms such as weight decay that depend directly on the weights rather than the models' outputs, every local minimum of a rectified linear or maxout network lies on an (m × n)-dimensional hyperbola of equivalent local minima.
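Both symmetries are easy to verify numerically. Below is a minimal sketch (plain NumPy, with an assumed one-hidden-layer ReLU network; the sizes and seed are arbitrary) that permutes two hidden units, then rescales one unit's incoming weights and bias by α and its outgoing weights by 1/α, and checks that the network's outputs are unchanged in both cases:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# A small one-hidden-layer ReLU network: x -> relu(x W1 + b1) W2.
W1 = rng.normal(size=(3, 4))   # incoming weights of the 4 hidden units
b1 = rng.normal(size=4)
W2 = rng.normal(size=(4, 2))   # outgoing weights

def forward(x, W1, b1, W2):
    return relu(x @ W1 + b1) @ W2

x = rng.normal(size=(5, 3))
out = forward(x, W1, b1, W2)

# Weight space symmetry: swap hidden units 0 and 1 by permuting the
# columns of W1 and entries of b1 (incoming) and the rows of W2 (outgoing).
perm = np.array([1, 0, 2, 3])
assert np.allclose(out, forward(x, W1[:, perm], b1[perm], W2[perm, :]))

# Scaling symmetry: relu is positively homogeneous, relu(a z) = a relu(z)
# for a > 0, so scaling unit 0's incoming weights and bias by alpha and
# its outgoing weights by 1/alpha leaves the function unchanged.
alpha = 3.7
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[:, 0] *= alpha
b1s[0] *= alpha
W2s[0, :] /= alpha
assert np.allclose(out, forward(x, W1s, b1s, W2s))
```

Any cost function that depends on the weights only through the network's outputs assigns identical cost to all of these parameter settings, which is exactly why they form equivalent minima.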
These model identifiability issues mean that there can be an extremely large or even uncountably infinite amount of local minima in a neural network cost function. However, all of these local minima arising from non-identifiability are equivalent to each other in cost function value. As a result, these local minima are not a problematic form of non-convexity.

Local minima can be problematic if they have high cost in comparison to the global minimum. One can construct small neural networks, even without hidden units, that have local minima with higher cost than the global minimum (Sontag and Sussman, 1989; Brady et al., 1989; Gori and Tesi, 1992). If local minima with high cost are common, this could pose a serious problem for gradient-based optimization algorithms. It remains an open question whether there are many local minima of high cost
for networks of practical interest and whether optimization algorithms encounter them. For many years, most practitioners believed that local minima were a common problem plaguing neural network optimization. Today, that does not appear to be the case. The problem remains an active area of research, but experts now suspect that, for sufficiently large neural networks, most local minima have a low cost function value, and that it is not important to find a true global minimum rather than to find a point in parameter space that has low but not minimal cost (Saxe et al., 2013; Dauphin et al., 2014; Goodfellow et al., 2015; Choromanska et al., 2014).

Many practitioners attribute nearly all difficulty with neural network optimization to local minima. We encourage practitioners to carefully test for specific problems. A test that can rule out local minima as the problem is to plot the norm of the gradient over time. If the norm of the gradient does not shrink to insignificant size, the problem is neither local minima nor any other kind of critical point. This kind of negative test can rule out local minima.
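The negative test just described is cheap to implement. A hedged sketch (a toy logistic-regression objective stands in for a real network here; in practice one would log the norm of the minibatch gradient once per epoch during actual training):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification problem.
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)

w = np.zeros(5)
lr = 0.5
grad_norms = []
for epoch in range(100):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
    grad = X.T @ (p - y) / len(y)        # gradient of the mean NLL
    grad_norms.append(np.linalg.norm(grad))
    w -= lr * grad

# A shrinking gradient norm is consistent with approaching a critical
# point; a norm that stays large rules out local minima (and any other
# critical point) as the obstacle.
print(grad_norms[0], grad_norms[-1])
```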
In high dimensional spaces, it can be very difficult to positively establish that local minima are the problem. Many structures other than local minima also have small gradients.

8.2.3 Plateaus, Saddle Points and Other Flat Regions

For many high-dimensional non-convex functions, local minima (and maxima) are in fact rare compared to another kind of point with zero gradient: a saddle point. Some points around a saddle point have greater cost than the saddle point, while others have a lower cost. At a saddle point, the Hessian matrix has both positive and negative eigenvalues. Points lying along eigenvectors associated with positive eigenvalues have greater cost than the saddle point, while points lying along negative eigenvalues have lower value. We can think of a saddle point as being a local minimum along one cross-section of the cost function and a local maximum along another cross-section. See figure 4.5 for an illustration. Many classes of random functions exhibit the following behavior:
in low-dimensional spaces, local minima are common. In higher dimensional spaces, local minima are rare and saddle points are more common. For a function f : R^n → R of this type, the expected ratio of the number of saddle points to local minima grows exponentially with n. To understand the intuition behind this behavior, observe that the Hessian matrix at a local minimum has only positive eigenvalues. The Hessian matrix at a saddle point has a mixture of positive and negative eigenvalues. Imagine that the sign of each eigenvalue is generated by flipping a coin. In a single dimension, it is easy to obtain a local minimum by tossing a coin and getting heads once. In n-dimensional space, it is exponentially unlikely that all n coin tosses will
be heads. See Dauphin et al. (2014) for a review of the relevant theoretical work.

An amazing property of many random functions is that the eigenvalues of the Hessian become more likely to be positive as we reach regions of lower cost. In our coin tossing analogy, this means we are more likely to have our coin come up heads n times if we are at a critical point with low cost. This means that local minima are much more likely to have low cost than high cost. Critical points with high cost are far more likely to be saddle points. Critical points with extremely high cost are more likely to be local maxima.

This happens for many classes of random functions. Does it happen for neural networks? Baldi and Hornik (1989) showed theoretically that shallow autoencoders (feedforward networks trained to copy their input to their output, described in chapter 14) with no nonlinearities have global minima and saddle points but no local minima with higher cost than the global minimum. They observed without proof that these results extend to deeper networks without nonlinearities.
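The coin-tossing analogy above can be simulated directly. A minimal sketch (eigenvalue signs drawn as independent fair coins, which is only a caricature of a real Hessian and ignores the cost-dependence of the sign probabilities): the fraction of critical points that are minima decays like 2^-n:

```python
import random

random.seed(0)

def fraction_minima(n, trials=20000):
    """Fraction of simulated critical points whose n eigenvalue signs
    (independent fair coin flips) all come up positive, i.e. minima."""
    hits = 0
    for _ in range(trials):
        if all(random.random() < 0.5 for _ in range(n)):
            hits += 1
    return hits / trials

# Expected fractions: 2^-1 = 0.5, 2^-5 = 0.03125, 2^-10 < 0.001.
for n in (1, 5, 10):
    print(n, fraction_minima(n))
```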
The output of such networks is a linear function of their input, but they are useful to study as a model of nonlinear neural networks because their loss function is a non-convex function of their parameters. Such networks are essentially just multiple matrices composed together. Saxe et al. (2013) provided exact solutions to the complete learning dynamics in such networks and showed that learning in these models captures many of the qualitative features observed in the training of deep models with nonlinear activation functions. Dauphin et al. (2014) showed experimentally that real neural networks also have loss functions that contain very many high-cost saddle points. Choromanska et al. (2014) provided additional theoretical arguments, showing that another class of high-dimensional random functions related to neural networks does so as well.

What are the implications of the proliferation of saddle points for training algorithms? For first-order optimization algorithms that use only gradient information, the situation is unclear. The gradient can often become very small near a saddle point. On the other hand, gradient descent empirically seems to be able to escape saddle points in many cases.
Goodfellow et al. (2015) provided visualizations of several learning trajectories of state-of-the-art neural networks, with an example given in figure 8.2. These visualizations show a flattening of the cost function near a prominent saddle point where the weights are all zero, but they also show the gradient descent trajectory rapidly escaping this region. Goodfellow et al. (2015) also argue that continuous-time gradient descent may be shown analytically to be repelled from, rather than attracted to, a nearby saddle point, but the situation may be different for more realistic uses of gradient descent. For Newton's method, it is clear that saddle points constitute a problem.
Figure 8.2: A visualization of the cost function of a neural network (axes: projection 1 of θ, projection 2 of θ, and J(θ)). Image adapted with permission from Goodfellow et al. (2015). These visualizations appear similar for feedforward neural networks, convolutional networks, and recurrent networks applied to real object recognition and natural language processing tasks. Surprisingly, these visualizations usually do not show many conspicuous obstacles. Prior to the success of stochastic gradient descent for training very large models beginning in roughly 2012, neural net cost function surfaces were generally believed to have much more non-convex structure than is revealed by these projections. The primary obstacle revealed by this projection is a saddle point of high cost near where the parameters are initialized, but, as indicated by the blue path, the SGD training trajectory escapes this saddle point readily. Most of training time is spent traversing the relatively flat valley of the cost function, which may be due to high noise in the gradient, poor conditioning of the Hessian matrix in this region, or simply the need to circumnavigate the tall "mountain" visible in the figure via an indirect arcing path.
Gradient descent is designed to move "downhill" and is not explicitly designed to seek a critical point. Newton's method, however, is designed to solve for a point where the gradient is zero. Without appropriate modification, it can jump to a saddle point. The proliferation of saddle points in high dimensional spaces presumably explains why second-order methods have not succeeded in replacing gradient descent for neural network training. Dauphin et al. (2014) introduced a saddle-free Newton method for second-order optimization and showed that it improves significantly over the traditional version. Second-order methods remain difficult to scale to large neural networks, but this saddle-free approach holds promise if it could be scaled.

There are other kinds of points with zero gradient besides minima and saddle points. There are also maxima, which are much like saddle points from the perspective of optimization: many algorithms are not attracted to them, but unmodified Newton's method is. Maxima of many classes of random functions become exponentially rare in high dimensional space, just like minima do. There may also be wide, flat regions of constant value. In these locations, the gradient and also the Hessian are all zero.
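Newton's attraction to saddle points, in contrast with gradient descent's behavior, can be seen even in two dimensions. A minimal sketch on f(x, y) = x^2 - y^2, whose only critical point is the saddle at the origin:

```python
import numpy as np

# f(x, y) = x^2 - y^2: gradient (2x, -2y), constant Hessian diag(2, -2).
H = np.diag([2.0, -2.0])

def grad(p):
    return np.array([2.0 * p[0], -2.0 * p[1]])

p0 = np.array([1.0, 0.5])

# One Newton step p - H^{-1} grad(p) solves for the zero-gradient point
# and lands exactly on the saddle at the origin.
p_newton = p0 - np.linalg.solve(H, grad(p0))
assert np.allclose(p_newton, [0.0, 0.0])

# Gradient descent instead moves downhill: the y coordinate grows each
# step, carrying the iterate away from the saddle.
p_gd = p0.copy()
for _ in range(50):
    p_gd = p_gd - 0.1 * grad(p_gd)
assert abs(p_gd[1]) > 100.0 and abs(p_gd[0]) < 1e-4
```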
Such degenerate locations pose major problems for all numerical optimization algorithms. In a convex problem, a wide, flat region must consist entirely of global minima, but in a general optimization problem, such a region could correspond to a high value of the objective function.

8.2.4 Cliffs and Exploding Gradients

Neural networks with many layers often have extremely steep regions resembling cliffs, as illustrated in figure 8.3. These result from the multiplication of several large weights together. On the face of an extremely steep cliff structure, the gradient update step can move the parameters extremely far, usually jumping off of the cliff structure altogether.
Figure 8.3: The objective function for highly nonlinear deep neural networks or for recurrent neural networks often contains sharp nonlinearities in parameter space resulting from the multiplication of several parameters. These nonlinearities give rise to very high derivatives in some places. When the parameters get close to such a cliff region, a gradient descent update can catapult the parameters very far, possibly losing most of the optimization work that had been done. Figure adapted with permission from Pascanu et al. (2013).

The cliff can be dangerous whether we approach it from above or from below, but fortunately its most serious consequences can be avoided using the gradient clipping heuristic described in section 10.11.1. The basic idea is to recall that the gradient does not specify the optimal step size, but only the optimal direction within an infinitesimal region. When the traditional gradient descent algorithm proposes to make a very large step, the gradient clipping heuristic intervenes to reduce the step size to be small enough that it is less likely to go outside the region where the gradient indicates the direction of approximately steepest descent.
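The clipping heuristic can be sketched in a few lines. This is the norm-clipping variant (section 10.11.1 discusses alternatives), with the threshold as an assumed hyperparameter:

```python
import numpy as np

def clip_gradient(grad, threshold):
    """Rescale grad so its norm never exceeds threshold; the step size
    is capped while the descent direction is preserved."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

g = np.array([30.0, 40.0])            # norm 50: a cliff-sized gradient
clipped = clip_gradient(g, 5.0)

assert np.isclose(np.linalg.norm(clipped), 5.0)   # norm capped
assert np.allclose(clipped / 5.0, g / 50.0)       # direction unchanged
```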
Cliff structures are most common in the cost functions for recurrent neural networks, because such models involve a multiplication of many factors, with one factor for each time step. Long temporal sequences thus incur an extreme amount of multiplication.

8.2.5 Long-Term Dependencies

Another difficulty that neural network optimization algorithms must overcome arises when the computational graph becomes extremely deep. Feedforward networks with many layers have such deep computational graphs. So do recurrent networks, described in chapter 10, which construct very deep computational graphs
by repeatedly applying the same operation at each time step of a long temporal sequence. Repeated application of the same parameters gives rise to especially pronounced difficulties.

For example, suppose that a computational graph contains a path that consists of repeatedly multiplying by a matrix W. After t steps, this is equivalent to multiplying by W^t. Suppose that W has an eigendecomposition W = V diag(λ) V^{-1}. In this simple case, it is straightforward to see that

W^t = (V diag(λ) V^{-1})^t = V diag(λ)^t V^{-1}.    (8.11)

Any eigenvalues λ_i that are not near an absolute value of 1 will either explode, if they are greater than 1 in magnitude, or vanish, if they are less than 1 in magnitude. The vanishing and exploding gradient problem refers to the fact that gradients through such a graph are also scaled according to diag(λ)^t. Vanishing gradients make it difficult to know which direction the parameters should move to improve the cost function, while exploding gradients can make learning unstable. The cliff structures described earlier that motivate gradient clipping are an example of the exploding gradient phenomenon.
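Equation 8.11 and its consequences are easy to confirm numerically. A small sketch with an assumed diagonalizable 2×2 matrix W whose eigenvalues are 1.1 and 0.9:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build W = V diag(lam) V^{-1} with one eigenvalue above 1 and one below.
lam = np.array([1.1, 0.9])
V = rng.normal(size=(2, 2))
W = V @ np.diag(lam) @ np.linalg.inv(V)

t = 100
Wt = np.linalg.matrix_power(W, t)

# Equation 8.11: W^t = V diag(lam)^t V^{-1}.
expected = V @ np.diag(lam ** t) @ np.linalg.inv(V)
assert np.allclose(Wt, expected, rtol=1e-6, atol=1e-4)

# 1.1^100 explodes to roughly 1.4e4 while 0.9^100 vanishes to
# roughly 2.7e-5.
eigs = np.sort(np.abs(np.linalg.eigvals(Wt)))
assert eigs[-1] > 1e4 and eigs[0] < 1e-3
```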
The repeated multiplication by W at each time step described here is very similar to the power method algorithm used to find the largest eigenvalue of a matrix W and the corresponding eigenvector. From this point of view it is not surprising that x^⊤W^t will eventually discard all components of x that are orthogonal to the principal eigenvector of W.

Recurrent networks use the same matrix W at each time step, but feedforward networks do not, so even very deep feedforward networks can largely avoid the vanishing and exploding gradient problem (Sussillo, 2014). We defer a further discussion of the challenges of training recurrent networks until section 10.7, after recurrent networks have been described in more detail.
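The power method connection can also be checked directly. A minimal sketch (an assumed symmetric W constructed with a known, well-separated spectrum; the vector is renormalized each step to avoid overflow):

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric W = Q diag(evals) Q^T with a dominant eigenvalue of 3.
evals = np.array([3.0, 1.0, 0.5, 0.2, 0.1])
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
W = Q @ np.diag(evals) @ Q.T
principal = Q[:, 0]                   # eigenvector of the eigenvalue 3

x = rng.normal(size=5)
for _ in range(100):
    x = W @ x
    x = x / np.linalg.norm(x)         # renormalize each step

# Components of x orthogonal to the principal eigenvector are damped by
# factors of at most (1/3) per step, so x converges to it (up to sign).
err = min(np.linalg.norm(x - principal), np.linalg.norm(x + principal))
assert err < 1e-8
```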
8.2.6 Inexact Gradients

Most optimization algorithms are designed with the assumption that we have access to the exact gradient or Hessian matrix. In practice, we usually only have a noisy or even biased estimate of these quantities. Nearly every deep learning algorithm relies on sampling-based estimates, at least insofar as using a minibatch of training examples to compute the gradient.

In other cases, the objective function we want to minimize is actually intractable. When the objective function is intractable, typically its gradient is intractable as well. In such cases we can only approximate the gradient. These issues mostly arise
with the more advanced models in part III. For example, contrastive divergence gives a technique for approximating the gradient of the intractable log-likelihood of a Boltzmann machine.

Various neural network optimization algorithms are designed to account for imperfections in the gradient estimate. One can also avoid the problem by choosing a surrogate loss function that is easier to approximate than the true loss.

8.2.7 Poor Correspondence between Local and Global Structure

Many of the problems we have discussed so far correspond to properties of the loss function at a single point: it can be difficult to make a single step if J(θ) is poorly conditioned at the current point θ, or if θ lies on a cliff, or if θ is a saddle point hiding the opportunity to make progress downhill from the gradient. It is possible to overcome all of these problems at a single point and still perform poorly if the direction that results in the most improvement locally does not point toward distant regions of much lower cost.

Goodfellow et al. (2015) argue that much of the runtime of training is due to the length of the trajectory needed to arrive at the solution. Figure 8.2 shows that the learning trajectory spends most of its time tracing out a
wide arc around a mountain-shaped structure.

Much of research into the difficulties of optimization has focused on whether training arrives at a global minimum, a local minimum, or a saddle point, but in practice neural networks do not arrive at a critical point of any kind. Figure 8.1 shows that neural networks often do not arrive at a region of small gradient. Indeed, such critical points do not even necessarily exist. For example, the loss function −log p(y | x; θ) can lack a global minimum point and instead asymptotically approach some value as the model becomes more confident. For a classifier with discrete y and p(y | x) provided by a softmax, the negative log-likelihood can become arbitrarily close to zero if the model is able to correctly classify every example in the training set, but it is impossible to actually reach the value of zero. Likewise, a model of real values p(y | x) = N(y; f