exponentially with the depth of the forward propagation graph. This large cost would be incurred because the same computation for ∂u^(i)/∂u^(j) would be redone many times. To avoid such recomputation, we can think of back-propagation as a table-filling algorithm that takes advantage of storing intermediate results ∂u^(n)/∂u^(i). Each node in the graph has a corresponding slot in a table to store the gradient for that node. By filling in these table entries in order, back-propagation avoids repeating many common subexpressions.
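The table-filling idea can be sketched in a few lines. The graph below is a hypothetical toy example (not from the book): each slot grad[i] stores ∂u4/∂ui and is filled exactly once, in reverse topological order, so no partial derivative is ever recomputed.

```python
# Back-propagation as table-filling (dynamic programming) on a tiny
# scalar graph.  Hypothetical example: u1 = x, u2 = u1*u1,
# u3 = u1 + u2, u4 = u2 * u3.
x = 3.0
u1 = x
u2 = u1 * u1
u3 = u1 + u2
u4 = u2 * u3

# grad[i] stores d u4 / d u_i; each slot is filled exactly once,
# in reverse topological order.
grad = {4: 1.0}
grad[3] = grad[4] * u2            # d u4 / d u3 = u2
grad[2] = grad[4] * u3 + grad[3]  # u4 uses u2 directly; u3 also uses u2
grad[1] = grad[3] * 1.0 + grad[2] * (2 * u1)
# Analytically u4 = u1**3 + u1**4, so d u4/d u1 = 3*u1**2 + 4*u1**3 = 135.
```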
Source: Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville, p. 233.

Chapter 6. Deep Feedforward Networks

This table-filling strategy is sometimes called dynamic programming.

6.5.7 Example: Back-Propagation for MLP Training

As an example, we walk through the back-propagation algorithm as it is used to train a multilayer perceptron. Here we develop a very simple multilayer perceptron with a single hidden layer. To train this model, we use minibatch stochastic gradient descent. The back-propagation algorithm is used to compute the gradient of the cost on a single minibatch. Specifically, we use a minibatch of examples from the training set formatted as a design matrix X and a vector of associated class labels y. The network computes a layer of hidden features H = max{0, XW^(1)}. To simplify the presentation, we do not use biases in this model. We assume that our graph language includes a relu operation that can compute max{0, Z} element-wise. The predictions of the unnormalized log probabilities over classes are then given by HW^(2). We assume that our graph language includes a cross_entropy operation
that computes the cross-entropy between the targets y and the probability distribution defined by these unnormalized log probabilities. The resulting cross-entropy defines the cost J_MLE. Minimizing this cross-entropy performs maximum likelihood estimation of the classifier. However, to make this example more realistic, we also include a regularization term. The total cost

J = J_MLE + λ ( Σ_{i,j} (W^(1)_{i,j})^2 + Σ_{i,j} (W^(2)_{i,j})^2 )    (6.56)

consists of the cross-entropy and a weight decay term with coefficient λ. The computational graph is illustrated in figure 6.11. The computational graph for the gradient of this example is large enough that it would be tedious to draw or to read. This demonstrates one of the benefits of the back-propagation algorithm, which is that it can automatically generate gradients that would be straightforward but tedious for a software engineer to derive
manually. We can roughly trace out the behavior of the back-propagation algorithm by looking at the forward propagation graph in figure 6.11. To train, we wish to compute both ∇_{W^(1)} J and ∇_{W^(2)} J. There are two different paths leading backward from J to the weights: one through the cross-entropy cost, and one through the weight decay cost. The weight decay cost is relatively simple; it will always contribute 2λW^(i) to the gradient on W^(i).
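The forward computation of the total cost in equation 6.56 can be sketched in plain Python. This is a minimal sketch: the design matrix, labels, weights, and λ below are all made-up values, not the book's code.

```python
# Forward pass and total cost J = J_MLE + lambda*(sum W1_ij^2 + sum W2_ij^2)
# for the single-hidden-layer MLP, with hypothetical small shapes.
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def relu(M):
    return [[max(0.0, v) for v in row] for row in M]

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the correct class under softmax.
    total = 0.0
    for row, y in zip(logits, labels):
        m = max(row)
        log_z = m + math.log(sum(math.exp(v - m) for v in row))
        total += log_z - row[y]
    return total / len(labels)

X = [[1.0, 2.0], [0.5, -1.0]]       # design matrix (2 examples)
y = [0, 1]                          # class labels
W1 = [[0.1, -0.2], [0.3, 0.4]]      # hidden-layer weights
W2 = [[0.5, -0.5], [-0.3, 0.2]]     # output-layer weights
lam = 0.01

H = relu(matmul(X, W1))             # H = max{0, X W1}
logits = matmul(H, W2)              # unnormalized log probabilities H W2
decay = sum(v * v for M in (W1, W2) for row in M for v in row)
J = cross_entropy(logits, y) + lam * decay
```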
[Figure 6.11: The computational graph used to compute the cost used to train our example of a single-layer MLP using the cross-entropy loss and weight decay. Nodes: inputs x, W^(1), W^(2), y; a matmul producing U^(1), a relu producing H, a matmul producing U^(2), and a cross_entropy producing J_MLE; sqr and sum nodes U^(3), U^(4) and U^(5), U^(6) for the two squared weight norms, scaled by λ and combined through U^(7), U^(8); and a final sum producing J.]

The other path through the cross-entropy cost is slightly more complicated. Let G be the gradient on the unnormalized log probabilities U^(2) provided by the cross_entropy operation. The back-propagation algorithm now needs to explore two different branches. On the shorter branch, it adds H⊤G to the gradient on W^(2), using the back-propagation rule for the second argument to the matrix multiplication operation. The other branch corresponds to the longer chain descending further along the network.
First, the back-propagation algorithm computes ∇_H J = GW^(2)⊤ using the back-propagation rule for the first argument to the matrix multiplication operation. Next, the relu operation uses its back-propagation rule to zero out components of the gradient corresponding to entries of U^(1) that were less than 0. Let the result be called G′. The last step of the back-propagation algorithm is to use the back-propagation rule for the second argument of the matmul operation to add X⊤G′ to the gradient on W^(1). After these gradients have been computed, it is the responsibility of the gradient descent algorithm, or another optimization algorithm, to use these gradients to update the parameters. For the MLP, the computational cost is dominated by the cost of matrix multiplication. During the forward propagation stage, we multiply by each weight
matrix, resulting in O(W) multiply-adds, where W is the number of weights. During the backward propagation stage, we multiply by the transpose of each weight matrix, which has the same computational cost. The main memory cost of the algorithm is that we need to store the input to the nonlinearity of the hidden layer. This value is stored from the time it is computed until the backward pass has returned to the same point. The memory cost is thus O(m n_h), where m is the number of examples in the minibatch and n_h is the number of hidden units.

6.5.8 Complications

Our description of the back-propagation algorithm here is simpler than the implementations actually used in practice. As noted above, we have restricted the definition of an operation to be a function that returns a single tensor. Most software implementations need to support operations that can return more than one tensor. For example, if we wish to compute both the maximum value in a tensor and the index of that value, it is best to compute both in a single pass through memory, so it is most efficient to implement this procedure as a single operation with two outputs. We have not
described how to control the memory consumption of back-propagation. Back-propagation often involves the summation of many tensors together. In the naive approach, each of these tensors would be computed separately, then all of them would be added in a second step. The naive approach has an overly high memory bottleneck that can be avoided by maintaining a single buffer and adding each value to that buffer as it is computed. Real-world implementations of back-propagation also need to handle various data types, such as 32-bit floating point, 64-bit floating point, and integer values. The policy for handling each of these types takes special care to design. Some operations have undefined gradients, and it is important to track these cases and determine whether the gradient requested by the user is undefined. Various other technicalities make real-world differentiation more complicated. These technicalities are not insurmountable, and this chapter has described the key intellectual tools needed to compute derivatives, but it
is important to be aware that many more subtleties exist.

6.5.9 Differentiation outside the Deep Learning Community

The deep learning community has been somewhat isolated from the broader computer science community and has largely developed its own cultural attitudes
concerning how to perform differentiation. More generally, the field of automatic differentiation is concerned with how to compute derivatives algorithmically. The back-propagation algorithm described here is only one approach to automatic differentiation. It is a special case of a broader class of techniques called reverse mode accumulation. Other approaches evaluate the subexpressions of the chain rule in different orders. In general, determining the order of evaluation that results in the lowest computational cost is a difficult problem. Finding the optimal sequence of operations to compute the gradient is NP-complete (Naumann, 2008), in the sense that it may require simplifying algebraic expressions into their least expensive form. For example, suppose we have variables p1, p2, ..., pn representing probabilities and variables z1, z2, ..., zn representing unnormalized log probabilities. Suppose we define

q_i = exp(z_i) / Σ_j exp(z_j),    (6.57)

where we build the softmax function out of exponentiation, summation and division operations, and construct a cross-entropy loss J = −Σ_i p_i log q_i. A human mathematician can observe that the derivative of J with respect to z_i takes a very simple form: q_i − p_i.
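This simplification can be checked numerically. The sketch below uses made-up values: the analytic gradient q_i − p_i agrees with a central finite difference of J built from exactly the exp, sum, division, and log operations named above.

```python
# Checking d J / d z_i = q_i - p_i numerically, with hypothetical values.
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def J(z, p):
    # Cross-entropy built from exp, sum, division, and log, as in the text.
    q = softmax(z)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

z = [0.2, -0.5, 1.3]      # unnormalized log probabilities
p = [0.1, 0.6, 0.3]       # target probabilities (sum to 1)

analytic = [qi - pi for qi, pi in zip(softmax(z), p)]

eps = 1e-6
numeric = []
for i in range(len(z)):
    zp, zm = list(z), list(z)
    zp[i] += eps
    zm[i] -= eps
    numeric.append((J(zp, p) - J(zm, p)) / (2 * eps))

assert all(abs(a - n) < 1e-5 for a, n in zip(analytic, numeric))
```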
The back-propagation algorithm is not capable of simplifying the gradient this way, and will instead explicitly propagate gradients through all of the logarithm and exponentiation operations in the original graph. Some software libraries, such as Theano (Bergstra et al., 2010; Bastien et al., 2012), are able to perform some kinds of algebraic substitution to improve over the graph proposed by the pure back-propagation algorithm. When the forward graph G has a single output node and each partial derivative ∂u^(i)/∂u^(j) can be computed with a constant amount of computation, back-propagation guarantees that the number of computations for the gradient computation is of the same order as the number of computations for the forward computation: this can be seen in algorithm 6.2, because each local partial derivative ∂u^(i)/∂u^(j) needs to be computed only once along with an associated
multiplication and an addition for the recursive chain-rule formulation (equation 6.49). The overall computation is therefore O(# edges). However, it can potentially be reduced by simplifying the computational graph constructed by back-propagation, and this is an NP-complete task. Implementations such as Theano and TensorFlow use heuristics based on matching known simplification patterns in order to iteratively attempt to simplify the graph. We defined back-propagation only for the computation of the gradient of a scalar output, but back-propagation can be extended to compute a Jacobian (either of k different scalar nodes in the graph, or of a tensor-valued node containing k values). A naive implementation may then need k times more computation: for
each scalar internal node in the original forward graph, the naive implementation computes k gradients instead of a single gradient. When the number of outputs of the graph is larger than the number of inputs, it is sometimes preferable to use another form of automatic differentiation called forward mode accumulation. Forward mode computation has been proposed for obtaining real-time computation of gradients in recurrent networks, for example (Williams and Zipser, 1989). This approach also avoids the need to store the values and gradients for the whole graph, trading computational efficiency for memory. The relationship between forward mode and backward mode is analogous to the relationship between left-multiplying and right-multiplying a sequence of matrices, such as

A B C D,    (6.58)

where the matrices can be thought of as Jacobian matrices. For example, if D is a column vector while A has many rows, this corresponds to a graph with a single output and many inputs, and starting the multiplications from the end and going backwards only requires matrix-vector products. This corresponds to the backward mode. Instead, starting to multiply from the left would involve a series of matrix-matrix products, which makes the whole computation much more expensive. However, if A has fewer rows than D has columns, it is cheaper to run the multiplications left-to-right, corresponding to the forward mode.
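The cost asymmetry can be made concrete by counting scalar multiplications. The dimensions below are hypothetical: multiplying an (a × b) matrix by a (b × c) matrix costs a·b·c multiplications, so with D a column vector, grouping from the right keeps every intermediate a vector.

```python
# Counting scalar multiplications for the chain A B C D (eq. 6.58),
# with made-up dimensions.
dims = [(100, 50), (50, 20), (20, 10), (10, 1)]  # shapes of A, B, C, D

def chain_cost_left_to_right(shapes):
    # ((AB)C)D : forward-mode-style matrix-matrix products.
    total = 0
    r, c = shapes[0]
    for r2, c2 in shapes[1:]:
        assert c == r2, "inner dimensions must agree"
        total += r * c * c2
        c = c2
    return total

def chain_cost_right_to_left(shapes):
    # A(B(CD)) : backward-mode-style matrix-vector products.
    total = 0
    r, c = shapes[-1]
    for r2, c2 in reversed(shapes[:-1]):
        assert c2 == r, "inner dimensions must agree"
        total += r2 * c2 * c
        r = r2
    return total

print(chain_cost_left_to_right(dims))   # 121000 multiplications
print(chain_cost_right_to_left(dims))   # 6200 multiplications
```

With these shapes the backward-mode grouping is roughly 20 times cheaper, mirroring the text's point that a single scalar output makes reverse mode the efficient direction.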
In many communities outside of machine learning, it is more common to implement differentiation software that acts directly on traditional programming language code, such as Python or C code, and automatically generates programs that differentiate functions written in these languages. In the deep learning community, computational graphs are usually represented by explicit data structures created by specialized libraries. The specialized approach has the drawback of requiring the library developer to define the bprop methods for every operation and limiting the user of the library to only those operations that have been defined. However, the specialized approach also has the benefit of allowing customized back-propagation rules to be developed for each operation, allowing the developer to improve speed or stability in non-obvious ways that an automatic procedure would presumably be unable to replicate. Back-propagation is therefore not the only way or the optimal way of computing the
gradient, but it is a very practical method that continues to serve the deep learning community very well. In the future, differentiation technology for deep networks may improve as deep learning practitioners become more aware of advances in the broader field of automatic differentiation.
6.5.10 Higher-Order Derivatives

Some software frameworks support the use of higher-order derivatives. Among the deep learning software frameworks, this includes at least Theano and TensorFlow. These libraries use the same kind of data structure to describe the expressions for derivatives as they use to describe the original function being differentiated. This means that the symbolic differentiation machinery can be applied to derivatives. In the context of deep learning, it is rare to compute a single second derivative of a scalar function. Instead, we are usually interested in properties of the Hessian matrix. If we have a function f : R^n → R, then the Hessian matrix is of size n × n. In typical deep learning applications, n will be the number of parameters in the model, which could easily number in the billions. The entire Hessian matrix is thus infeasible to even represent. Instead of explicitly computing the Hessian, the typical deep learning approach is to use Krylov methods. Krylov methods are a set of iterative techniques for performing various operations, like approximately inverting a matrix or finding approximations to its eigenvectors or eigenvalues, without using any operation other than matrix-vector products
. In order to use Krylov methods on the Hessian, we only need to be able to compute the product between the Hessian matrix H and an arbitrary vector v. A straightforward technique (Christianson, 1992) for doing so is to compute

Hv = ∇_x [ (∇_x f(x))⊤ v ].    (6.59)

Both of the gradient computations in this expression may be computed automatically by the appropriate software library. Note that the outer gradient expression takes the gradient of a function of the inner gradient expression. If v is itself a vector produced by a computational graph, it is important to specify that the automatic differentiation software should not differentiate through the graph that produced v. While computing the Hessian is usually not advisable, it is possible to do so with Hessian-vector products. One simply computes He^(i) for all i = 1, ..., n, where e^(i) is the one-hot vector with e^(i)_i = 1 and all other entries equal to 0.
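Equation 6.59 can be illustrated numerically. The sketch below uses a made-up function f (not from the book); where an autodiff library would compute the outer gradient symbolically, we approximate it with a central finite difference of the inner gradient and compare against the closed-form Hessian.

```python
# Hessian-vector product H v = grad_x[(grad_x f(x))^T v], checked on a
# hypothetical function f(x1, x2) = x1^2 * x2 + sin(x1).
import math

def grad(x):
    x1, x2 = x
    return [2 * x1 * x2 + math.cos(x1), x1 * x1]

def hessian(x):
    # Closed-form Hessian of f, for comparison only.
    x1, x2 = x
    return [[2 * x2 - math.sin(x1), 2 * x1],
            [2 * x1, 0.0]]

def hvp(x, v, eps=1e-6):
    # Finite-difference stand-in for the outer gradient in eq. 6.59.
    xp = [xi + eps * vi for xi, vi in zip(x, v)]
    xm = [xi - eps * vi for xi, vi in zip(x, v)]
    gp, gm = grad(xp), grad(xm)
    return [(a - b) / (2 * eps) for a, b in zip(gp, gm)]

x = [0.7, -1.2]
v = [1.0, 2.0]
exact = [sum(h * vi for h, vi in zip(row, v)) for row in hessian(x)]
approx = hvp(x, v)
assert all(abs(a - b) < 1e-4 for a, b in zip(exact, approx))
```

Note that hvp never forms the full Hessian: it only evaluates the gradient twice, which is what makes the technique usable when n is in the billions.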
6.6 Historical Notes

Feedforward networks can be seen as efficient nonlinear function approximators based on using gradient descent to minimize the error in a function approximation.
From this point of view, the modern feedforward network is the culmination of centuries of progress on the general function approximation task. The chain rule that underlies the back-propagation algorithm was invented in the 17th century (Leibniz, 1676; L'Hôpital, 1696). Calculus and algebra have long been used to solve optimization problems in closed form, but gradient descent was not introduced as a technique for iteratively approximating the solution to optimization problems until the 19th century (Cauchy, 1847). Beginning in the 1940s, these function approximation techniques were used to motivate machine learning models such as the perceptron. However, the earliest models were based on linear models. Critics including Marvin Minsky pointed out several of the flaws of the linear model family, such as its inability to learn the XOR function, which led to a backlash against the entire neural network approach. Learning nonlinear functions required the development of a multilayer perceptron and a means of computing the gradient through such a model. Efficient applications of the chain rule based on dynamic programming began to appear in the 1960s and 1970s, mostly for control applications (Kelley, 1960; Bryson and Denham, 1961;
Dreyfus, 1962; Bryson and Ho, 1969; Dreyfus, 1973) but also for sensitivity analysis (Linnainmaa, 1976). Werbos (1981) proposed applying these techniques to training artificial neural networks. The idea was finally developed in practice after being independently rediscovered in different ways (LeCun, 1985; Parker, 1985; Rumelhart et al., 1986a). The book Parallel Distributed Processing presented the results of some of the first successful experiments with back-propagation in a chapter (Rumelhart et al., 1986b) that contributed greatly to the popularization of back-propagation and initiated a very active period of research in multilayer neural networks. However, the ideas put forward by the authors of that book, and in particular by Rumelhart and Hinton, go much beyond back-propagation. They include crucial ideas about the possible computational implementation of several central aspects of cognition and learning, which came under the name
of "connectionism" because of the importance this school of thought places on the connections between neurons as the locus of learning and memory. In particular, these ideas include the notion of distributed representation (Hinton et al., 1986). Following the success of back-propagation, neural network research gained popularity and reached a peak in the early 1990s. Afterwards, other machine learning techniques became more popular until the modern deep learning renaissance that began in 2006. The core ideas behind modern feedforward networks have not changed substantially since the 1980s. The same back-propagation algorithm and the same
approaches to gradient descent are still in use. Most of the improvement in neural network performance from 1986 to 2015 can be attributed to two factors. First, larger datasets have reduced the degree to which statistical generalization is a challenge for neural networks. Second, neural networks have become much larger, due to more powerful computers and better software infrastructure. However, a small number of algorithmic changes have improved the performance of neural networks noticeably. One of these algorithmic changes was the replacement of mean squared error with the cross-entropy family of loss functions. Mean squared error was popular in the 1980s and 1990s but was gradually replaced by cross-entropy losses and the principle of maximum likelihood as ideas spread between the statistics community and the machine learning community. The use of cross-entropy losses greatly improved the performance of models with sigmoid and softmax outputs, which had previously suffered from saturation and slow learning when using the mean squared error loss. The other major algorithmic change that has greatly improved the performance of feedforward networks was the replacement of sigmoid hidden units with piecewise linear hidden units, such as rectified linear units. Rectification using the max{0, z} function was introduced in
early neural network models and dates back at least as far as the cognitron and neocognitron (Fukushima, 1975, 1980). These early models did not use rectified linear units, but instead applied rectification to nonlinear functions. Despite the early popularity of rectification, rectification was largely replaced by sigmoids in the 1980s, perhaps because sigmoids perform better when neural networks are very small. As of the early 2000s, rectified linear units were avoided due to a somewhat superstitious belief that activation functions with non-differentiable points must be avoided. This began to change in about 2009. Jarrett et al. (2009) observed that "using a rectifying nonlinearity is the single most important factor in improving the performance of a recognition system" among several different factors of neural network architecture design. For small datasets, Jarrett et al. (2009) observed that using rectifying nonlinearities is even more important than learning
the weights of the hidden layers. Random weights are sufficient to propagate useful information through a rectified linear network, allowing the classifier layer at the top to learn how to map different feature vectors to class identities. When more data is available, learning begins to extract enough useful knowledge to exceed the performance of randomly chosen parameters. Glorot et al. (2011a) showed that learning is far easier in deep rectified linear networks than in deep networks that have curvature or two-sided saturation in their activation functions.
Rectified linear units are also of historical interest because they show that neuroscience has continued to have an influence on the development of deep learning algorithms. Glorot et al. (2011a) motivate rectified linear units from biological considerations. The half-rectifying nonlinearity was intended to capture these properties of biological neurons: 1) For some inputs, biological neurons are completely inactive. 2) For some inputs, a biological neuron's output is proportional to its input. 3) Most of the time, biological neurons operate in the regime where they are inactive (i.e., they should have sparse activations). When the modern resurgence of deep learning began in 2006, feedforward networks continued to have a bad reputation. From about 2006 to 2012, it was widely believed that feedforward networks would not perform well unless they were assisted by other models, such as probabilistic models. Today, it is now known that with the right resources and engineering practices, feedforward networks perform very well. Today, gradient-based learning in feedforward networks is used as a tool to develop probabilistic models, such as the variational autoencoder
and generative adversarial networks, described in chapter 20. Rather than being viewed as an unreliable technology that must be supported by other techniques, gradient-based learning in feedforward networks has been viewed since 2012 as a powerful technology that may be applied to many other machine learning tasks. In 2006, the community used unsupervised learning to support supervised learning, and now, ironically, it is more common to use supervised learning to support unsupervised learning. Feedforward networks continue to have unfulfilled potential. In the future, we expect they will be applied to many more tasks, and that advances in optimization algorithms and model design will improve their performance even further. This chapter has primarily described the neural network family of models. In the subsequent chapters, we turn to how to use these models: how to regularize and train them.
Chapter 7. Regularization for Deep Learning

A central problem in machine learning is how to make an algorithm that will perform well not just on the training data, but also on new inputs. Many strategies used in machine learning are explicitly designed to reduce the test error, possibly at the expense of increased training error. These strategies are known collectively as regularization. As we will see, there are a great many forms of regularization available to the deep learning practitioner. In fact, developing more effective regularization strategies has been one of the major research efforts in the field.

Chapter 5 introduced the basic concepts of generalization, underfitting, overfitting, bias, variance and regularization. If you are not already familiar with these notions, please refer to that chapter before continuing with this one.

In this chapter, we describe regularization in more detail, focusing on regularization strategies for deep models or models that may be used as building blocks to form deep models. Some sections of this chapter deal with standard concepts in machine learning. If you are already familiar with these concepts, feel free to skip the relevant sections. However, most of this chapter is concerned with the extension of these basic concepts to the particular case of neural networks.
In section 5.2.2, we defined regularization as "any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error." There are many regularization strategies. Some put extra constraints on a machine learning model, such as adding restrictions on the parameter values. Some add extra terms in the objective function that can be thought of as corresponding to a soft constraint on the parameter values. If chosen carefully, these extra constraints and penalties can lead to improved performance
on the test set. Sometimes these constraints and penalties are designed to encode specific kinds of prior knowledge. Other times, these constraints and penalties are designed to express a generic preference for a simpler model class in order to promote generalization. Sometimes penalties and constraints are necessary to make an underdetermined problem determined. Other forms of regularization, known as ensemble methods, combine multiple hypotheses that explain the training data.

In the context of deep learning, most regularization strategies are based on regularizing estimators. Regularization of an estimator works by trading increased bias for reduced variance. An effective regularizer is one that makes a profitable trade, reducing variance significantly while not overly increasing the bias. When we discussed generalization and overfitting in chapter 5, we focused on three situations, where the model family being trained either (1) excluded the true data generating process, corresponding to underfitting and inducing bias, or (2) matched the true data generating process, or (3) included the generating process but also many other possible generating processes: the overfitting regime where variance rather than bias dominates the estimation error.
The goal of regularization is to take a model from the third regime into the second regime.

In practice, an overly complex model family does not necessarily include the target function or the true data generating process, or even a close approximation of either. We almost never have access to the true data generating process, so we can never know for sure if the model family being estimated includes the generating process or not. However, most applications of deep learning algorithms are to domains where the true data generating process is almost certainly outside the model family. Deep learning algorithms are typically applied to extremely complicated domains such as images, audio sequences and text, for which the true generation process essentially involves simulating the entire universe. To some extent, we are always trying to fit a square peg (the data generating process) into a round hole (our model family).

What this means is that controlling the complexity of the model is not a simple matter of finding the model of the right size, with the right number of parameters.
Instead, we might find (and indeed in practical deep learning scenarios, we almost always do find) that the best fitting model, in the sense of minimizing generalization error, is a large model that has been regularized appropriately. We now review several strategies for how to create such a large, deep, regularized model.
7.1 Parameter Norm Penalties

Regularization has been used for decades, prior to the advent of deep learning. Linear models such as linear regression and logistic regression allow simple, straightforward, and effective regularization strategies.

Many regularization approaches are based on limiting the capacity of models, such as neural networks, linear regression, or logistic regression, by adding a parameter norm penalty $\Omega(\theta)$ to the objective function $J$. We denote the regularized objective function by $\tilde{J}$:

$\tilde{J}(\theta; X, y) = J(\theta; X, y) + \alpha \Omega(\theta),$   (7.1)

where $\alpha \in [0, \infty)$ is a hyperparameter that weights the relative contribution of the norm penalty term, $\Omega$, relative to the standard objective function $J$. Setting $\alpha$ to 0 results in no regularization. Larger values of $\alpha$ correspond to more regularization.

When our training algorithm minimizes the regularized objective function $\tilde{J}$, it will decrease both the original objective $J$ on the training data and some measure of the size of the parameters $\theta$ (or some subset of the parameters). Different choices for the parameter norm $\Omega$ can result in different solutions being preferred.
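The regularized objective in equation 7.1 is straightforward to express in code. Below is a minimal NumPy sketch; the helper name, the toy quadratic objective, and the chosen $\alpha$ are illustrative assumptions, not from the book.

```python
import numpy as np

def regularized_objective(J, omega, alpha):
    """Return a function computing J(theta) + alpha * Omega(theta) (equation 7.1)."""
    return lambda theta: J(theta) + alpha * omega(theta)

# Toy unregularized cost and an L2 norm penalty Omega(w) = 0.5 * ||w||^2.
J = lambda w: float(np.sum((w - 1.0) ** 2))
omega = lambda w: 0.5 * float(np.dot(w, w))

J_reg = regularized_objective(J, omega, alpha=0.1)
w = np.array([1.0, 1.0])
print(J_reg(w))  # J(w) = 0, penalty = 0.1 * 0.5 * 2 = 0.1
```

Setting `alpha=0.0` recovers the unregularized objective, matching the text's remark that $\alpha = 0$ results in no regularization.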
In this section, we discuss the effects of the various norms when used as penalties on the model parameters.

Before delving into the regularization behavior of different norms, we note that for neural networks, we typically choose to use a parameter norm penalty $\Omega$ that penalizes only the weights of the affine transformation at each layer and leaves the biases unregularized. The biases typically require less data to fit accurately than the weights. Each weight specifies how two variables interact. Fitting the weight well requires observing both variables in a variety of conditions. Each bias controls only a single variable. This means that we do not induce too much variance by leaving the biases unregularized. Also, regularizing the bias parameters can introduce a significant amount of underfitting. We therefore use the vector $w$ to indicate all of the weights that should be affected by a norm penalty, while the vector $\theta$ denotes all of the parameters, including both $w$ and the unregularized parameters.
In the context of neural networks, it is sometimes desirable to use a separate penalty with a different $\alpha$ coefficient for each layer of the network. Because it can be expensive to search for the correct value of multiple hyperparameters, it is still reasonable to use the same weight decay at all layers just to reduce the size of the search space.
7.1.1 L2 Parameter Regularization

We have already seen, in section 5.2.2, one of the simplest and most common kinds of parameter norm penalty: the L2 parameter norm penalty, commonly known as weight decay. This regularization strategy drives the weights closer to the origin¹ by adding a regularization term $\Omega(\theta) = \frac{1}{2}\|w\|_2^2$ to the objective function. In other academic communities, L2 regularization is also known as ridge regression or Tikhonov regularization.

We can gain some insight into the behavior of weight decay regularization by studying the gradient of the regularized objective function. To simplify the presentation, we assume no bias parameter, so $\theta$ is just $w$. Such a model has the following total objective function:

$\tilde{J}(w; X, y) = \frac{\alpha}{2} w^\top w + J(w; X, y),$   (7.2)

with the corresponding parameter gradient

$\nabla_w \tilde{J}(w; X, y) = \alpha w + \nabla_w J(w; X, y).$   (7.3)
To take a single gradient step to update the weights, we perform this update:

$w \leftarrow w - \epsilon(\alpha w + \nabla_w J(w; X, y)).$   (7.4)

Written another way, the update is:

$w \leftarrow (1 - \epsilon\alpha) w - \epsilon \nabla_w J(w; X, y).$   (7.5)

We can see that the addition of the weight decay term has modified the learning rule to multiplicatively shrink the weight vector by a constant factor on each step, just before performing the usual gradient update. This describes what happens in a single step. But what happens over the entire course of training?

We will further simplify the analysis by making a quadratic approximation to the objective function in the neighborhood of the value of the weights that obtains minimal unregularized training cost, $w^* = \arg\min_w J(w)$. If the objective function is truly quadratic, as in the case of fitting a linear regression model with mean squared error, then the approximation is perfect.
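Equation 7.5 says each step first shrinks the weights by the factor $(1 - \epsilon\alpha)$ and then applies the usual gradient update. A minimal NumPy sketch of one such step; the toy quadratic objective and the values of $\epsilon$ and $\alpha$ are assumptions for illustration.

```python
import numpy as np

# Toy objective J(w) = 0.5 * ||w - t||^2, whose gradient is w - t.
t = np.array([2.0, -3.0])
grad_J = lambda w: w - t

w = np.array([1.0, 1.0])
eps, alpha = 0.1, 0.5  # learning rate and weight decay coefficient

# Equation 7.5: multiplicatively shrink the weights, then take the usual gradient step.
w = (1 - eps * alpha) * w - eps * grad_J(w)
print(w)  # [1.05 0.55]
```

Repeating this step drives $w$ toward a point between the origin and the unregularized minimum $t$, which is the behavior the following quadratic analysis makes precise.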
¹More generally, we could regularize the parameters to be near any specific point in space and, surprisingly, still get a regularization effect, but better results will be obtained for a value closer to the true one, with zero being a default value that makes sense when we do not know if the correct value should be positive or negative. Since it is far more common to regularize the model parameters towards zero, we will focus on this special case in our exposition.
The approximation $\hat{J}$ is given by

$\hat{J}(\theta) = J(w^*) + \frac{1}{2}(w - w^*)^\top H (w - w^*),$   (7.6)

where $H$ is the Hessian matrix of $J$ with respect to $w$ evaluated at $w^*$. There is no first-order term in this quadratic approximation, because $w^*$ is defined to be a minimum, where the gradient vanishes. Likewise, because $w^*$ is the location of a minimum of $J$, we can conclude that $H$ is positive semidefinite.

The minimum of $\hat{J}$ occurs where its gradient

$\nabla_w \hat{J}(w) = H(w - w^*)$   (7.7)

is equal to 0.

To study the effect of weight decay, we modify equation 7.7 by adding the weight decay gradient. We can now solve for the minimum of the regularized version of $\hat{J}$. We use the variable $\tilde{w}$ to represent the location of the minimum:

$\alpha\tilde{w} + H(\tilde{w} - w^*) = 0$   (7.8)
$(H + \alpha I)\tilde{w} = H w^*$   (7.9)
$\tilde{w} = (H + \alpha I)^{-1} H w^*.$   (7.10)
As $\alpha$ approaches 0, the regularized solution $\tilde{w}$ approaches $w^*$. But what happens as $\alpha$ grows? Because $H$ is real and symmetric, we can decompose it into a diagonal matrix $\Lambda$ and an orthonormal basis of eigenvectors, $Q$, such that $H = Q \Lambda Q^\top$. Applying the decomposition to equation 7.10, we obtain:

$\tilde{w} = (Q\Lambda Q^\top + \alpha I)^{-1} Q\Lambda Q^\top w^*$   (7.11)
$= \left[Q(\Lambda + \alpha I)Q^\top\right]^{-1} Q\Lambda Q^\top w^*$   (7.12)
$= Q(\Lambda + \alpha I)^{-1}\Lambda Q^\top w^*.$   (7.13)

We see that the effect of weight decay is to rescale $w^*$ along the axes defined by the eigenvectors of $H$.
Specifically, the component of $w^*$ that is aligned with the $i$-th eigenvector of $H$ is rescaled by a factor of $\frac{\lambda_i}{\lambda_i + \alpha}$. (You may wish to review how this kind of scaling works, first explained in figure 2.3.)

Along the directions where the eigenvalues of $H$ are relatively large, for example, where $\lambda_i \gg \alpha$, the effect of regularization is relatively small. However, components with $\lambda_i \ll \alpha$ will be shrunk to have nearly zero magnitude. This effect is illustrated in figure 7.1.
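This rescaling can be checked numerically: solving $(H + \alpha I)\tilde{w} = H w^*$ directly gives the same answer as scaling each eigen-component of $w^*$ by $\lambda_i/(\lambda_i + \alpha)$. A NumPy sketch with randomly generated data (the particular matrix and $\alpha$ are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = A @ A.T + 4.0 * np.eye(4)   # symmetric positive definite Hessian
w_star = rng.standard_normal(4)
alpha = 2.0

# Equation 7.10: w_tilde = (H + alpha I)^{-1} H w*.
w_tilde = np.linalg.solve(H + alpha * np.eye(4), H @ w_star)

# Equation 7.13: rescale each eigen-component of w* by lambda / (lambda + alpha).
lam, Q = np.linalg.eigh(H)
w_eig = Q @ ((lam / (lam + alpha)) * (Q.T @ w_star))

print(np.allclose(w_tilde, w_eig))  # True
```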
Figure 7.1: An illustration of the effect of L2 (or weight decay) regularization on the value of the optimal $w$. The solid ellipses represent contours of equal value of the unregularized objective. The dotted circles represent contours of equal value of the L2 regularizer. At the point $\tilde{w}$, these competing objectives reach an equilibrium. In the first dimension, the eigenvalue of the Hessian of $J$ is small. The objective function does not increase much when moving horizontally away from $w^*$. Because the objective function does not express a strong preference along this direction, the regularizer has a strong effect on this axis. The regularizer pulls $w_1$ close to zero. In the second dimension, the objective function is very sensitive to movements away from $w^*$. The corresponding eigenvalue is large, indicating high curvature. As a result, weight decay affects the position of $w_2$ relatively little.

Only directions along which the parameters contribute significantly to reducing the objective function are preserved relatively intact.
In directions that do not contribute to reducing the objective function, a small eigenvalue of the Hessian tells us that movement in this direction will not significantly increase the gradient. Components of the weight vector corresponding to such unimportant directions are decayed away through the use of the regularization throughout training.

So far we have discussed weight decay in terms of its effect on the optimization of an abstract, general, quadratic cost function. How do these effects relate to machine learning in particular? We can find out by studying linear regression, a model for which the true cost function is quadratic and therefore amenable to the same kind of analysis we have used so far. Applying the analysis again, we will be able to obtain a special case of the same results, but with the solution now phrased in terms of the training data. For linear regression, the cost function is
the sum of squared errors:

$(Xw - y)^\top (Xw - y).$   (7.14)

When we add L2 regularization, the objective function changes to

$(Xw - y)^\top (Xw - y) + \frac{1}{2}\alpha w^\top w.$   (7.15)

This changes the normal equations for the solution from

$w = (X^\top X)^{-1} X^\top y$   (7.16)

to

$w = (X^\top X + \alpha I)^{-1} X^\top y.$   (7.17)

The matrix $X^\top X$ in equation 7.16 is proportional to the covariance matrix $\frac{1}{m} X^\top X$. Using L2 regularization replaces this matrix with $(X^\top X + \alpha I)^{-1}$ in equation 7.17. The new matrix is the same as the original one, but with the addition of $\alpha$ to the diagonal. The diagonal entries of this matrix correspond to the variance of each input feature. We can see that L2 regularization causes the learning algorithm to "perceive" the input $X$ as having higher variance, which makes it shrink the weights on features whose covariance with the output target is low compared to this added variance.
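Equations 7.16 and 7.17 can be compared directly on synthetic data; as the eigenvalue analysis predicts, the regularized solution is shrunk toward the origin. A NumPy sketch (the generated data and the value of $\alpha$ are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)
alpha = 1.0

w_ols = np.linalg.solve(X.T @ X, X.T @ y)                        # equation 7.16
w_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)  # equation 7.17

# Weight decay shrinks every eigen-component, so the norm strictly decreases.
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))  # True
```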
7.1.2 L1 Regularization

While L2 weight decay is the most common form of weight decay, there are other ways to penalize the size of the model parameters. Another option is to use L1 regularization. Formally, L1 regularization on the model parameter $w$ is defined as:

$\Omega(\theta) = \|w\|_1 = \sum_i |w_i|,$   (7.18)

that is, as the sum of absolute values of the individual parameters.² We will now discuss the effect of L1 regularization on the simple linear regression model, with no bias parameter, that we studied in our analysis of L2 regularization. In particular, we are interested in delineating the differences between the L1 and L2 forms of regularization.
²As with L2 regularization, we could regularize the parameters towards a value that is not zero, but instead towards some parameter value $w^{(o)}$. In that case the L1 regularization would introduce the term $\Omega(\theta) = \|w - w^{(o)}\|_1 = \sum_i |w_i - w^{(o)}_i|$.
As with L2 weight decay, L1 weight decay controls the strength of the regularization by scaling the penalty $\Omega$ using a positive hyperparameter $\alpha$. Thus, the regularized objective function $\tilde{J}(w; X, y)$ is given by

$\tilde{J}(w; X, y) = \alpha\|w\|_1 + J(w; X, y),$   (7.19)

with the corresponding gradient (actually, sub-gradient):

$\nabla_w \tilde{J}(w; X, y) = \alpha\,\mathrm{sign}(w) + \nabla_w J(X, y; w),$   (7.20)

where $\mathrm{sign}(w)$ is simply the sign of $w$ applied element-wise.

By inspecting equation 7.20, we can see immediately that the effect of L1 regularization is quite different from that of L2 regularization. Specifically, we can see that the regularization contribution to the gradient no longer scales linearly with each $w_i$; instead it is a constant factor with a sign equal to $\mathrm{sign}(w_i)$.
One consequence of this form of the gradient is that we will not necessarily see clean algebraic solutions to quadratic approximations of $J(X, y; w)$ as we did for L2 regularization.

Our simple linear model has a quadratic cost function that we can represent via its Taylor series. Alternately, we could imagine that this is a truncated Taylor series approximating the cost function of a more sophisticated model. The gradient in this setting is given by

$\nabla_w \hat{J}(w) = H(w - w^*),$   (7.21)

where, again, $H$ is the Hessian matrix of $J$ with respect to $w$ evaluated at $w^*$.

Because the L1 penalty does not admit clean algebraic expressions in the case of a fully general Hessian, we will also make the further simplifying assumption that the Hessian is diagonal, $H = \mathrm{diag}([H_{1,1}, \ldots, H_{n,n}])$, where each $H_{i,i} > 0$. This assumption holds if the data for the linear regression problem has been preprocessed to remove all correlation between the input features, which may be accomplished using PCA.
Our quadratic approximation of the L1 regularized objective function decomposes into a sum over the parameters:

$\hat{J}(w; X, y) = J(w^*; X, y) + \sum_i \left[ \frac{1}{2} H_{i,i} (w_i - w^*_i)^2 + \alpha |w_i| \right].$   (7.22)

The problem of minimizing this approximate cost function has an analytical solution (for each dimension $i$), with the following form:

$w_i = \mathrm{sign}(w^*_i)\, \max\left\{ |w^*_i| - \frac{\alpha}{H_{i,i}},\ 0 \right\}.$   (7.23)
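Equation 7.23 is the familiar soft-thresholding operation. A minimal NumPy sketch (the example values of $w^*$, $H_{i,i}$, and $\alpha$ are illustrative assumptions):

```python
import numpy as np

def soft_threshold(w_star, h_diag, alpha):
    """Per-dimension minimizer of equation 7.22 (diagonal Hessian, L1 penalty)."""
    return np.sign(w_star) * np.maximum(np.abs(w_star) - alpha / h_diag, 0.0)

w_star = np.array([3.0, -0.2, 0.5])
h_diag = np.array([1.0, 1.0, 1.0])
w = soft_threshold(w_star, h_diag, alpha=0.5)
print(w)  # components with |w*_i| <= alpha / H_ii are driven exactly to zero
```

Here the first component is merely shifted toward zero by $\alpha / H_{1,1} = 0.5$, while the other two are set exactly to zero, illustrating the two cases analyzed next.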
Consider the situation where $w^*_i > 0$ for all $i$. There are two possible outcomes:

1. The case where $w^*_i \le \frac{\alpha}{H_{i,i}}$. Here the optimal value of $w_i$ under the regularized objective is simply $w_i = 0$. This occurs because the contribution of $J(w; X, y)$ to the regularized objective $\tilde{J}(w; X, y)$ is overwhelmed, in direction $i$, by the L1 regularization, which pushes the value of $w_i$ to zero.

2. The case where $w^*_i > \frac{\alpha}{H_{i,i}}$. In this case, the regularization does not move the optimal value of $w_i$ to zero, but instead it just shifts it in that direction by a distance equal to $\frac{\alpha}{H_{i,i}}$.

A similar process happens when $w^*_i < 0$, but with the L1 penalty making $w_i$ less negative by $\frac{\alpha}{H_{i,i}}$, or 0.

In comparison to L2 regularization, L1 regularization results in a solution that is more sparse.
Sparsity in this context refers to the fact that some parameters have an optimal value of zero. The sparsity of L1 regularization is a qualitatively different behavior than arises with L2 regularization. Equation 7.13 gave the solution $\tilde{w}$ for L2 regularization. If we revisit that equation using the assumption of a diagonal and positive definite Hessian $H$ that we introduced for our analysis of L1 regularization, we find that $\tilde{w}_i = \frac{H_{i,i}}{H_{i,i} + \alpha} w^*_i$. If $w^*_i$ was nonzero, then $\tilde{w}_i$ remains nonzero. This demonstrates that L2 regularization does not cause the parameters to become sparse, while L1 regularization may do so for large enough $\alpha$.

The sparsity property induced by L1 regularization has been used extensively as a feature selection mechanism. Feature selection simplifies a machine learning problem by choosing which subset of the available features should be used.
In particular, the well known LASSO (Tibshirani, 1995) (least absolute shrinkage and selection operator) model integrates an L1 penalty with a linear model and a least squares cost function. The L1 penalty causes a subset of the weights to become zero, suggesting that the corresponding features may safely be discarded.

In section 5.6.1, we saw that many regularization strategies can be interpreted as MAP Bayesian inference, and that in particular, L2 regularization is equivalent to MAP Bayesian inference with a Gaussian prior on the weights. For L1 regularization, the penalty $\alpha\Omega(w) = \alpha \sum_i |w_i|$ used to regularize a cost function is equivalent to the log-prior term that is maximized by MAP Bayesian inference when the prior is an isotropic Laplace distribution (equation 3.26) over $w \in \mathbb{R}^n$:

$\log p(w) = \sum_i \log \mathrm{Laplace}\!\left(w_i; 0, \tfrac{1}{\alpha}\right) = -\alpha \|w\|_1 + n \log \alpha - n \log 2.$   (7.24)
From the point of view of learning via maximization with respect to $w$, we can ignore the $\log \alpha - \log 2$ terms because they do not depend on $w$.

7.2 Norm Penalties as Constrained Optimization

Consider the cost function regularized by a parameter norm penalty:

$\tilde{J}(\theta; X, y) = J(\theta; X, y) + \alpha\Omega(\theta).$   (7.25)

Recall from section 4.4 that we can minimize a function subject to constraints by constructing a generalized Lagrange function, consisting of the original objective function plus a set of penalties. Each penalty is a product between a coefficient, called a Karush-Kuhn-Tucker (KKT) multiplier, and a function representing whether the constraint is satisfied. If we wanted to constrain $\Omega(\theta)$ to be less than some constant $k$, we could construct a generalized Lagrange function

$\mathcal{L}(\theta, \alpha; X, y) = J(\theta; X, y) + \alpha(\Omega(\theta) - k).$   (7.26)

The solution to the constrained problem is given by

$\theta^* = \arg\min_\theta \max_{\alpha, \alpha \ge 0} \mathcal{L}(\theta, \alpha).$   (7.27)
As described in section 4.4, solving this problem requires modifying both $\theta$ and $\alpha$. Section 4.5 provides a worked example of linear regression with an L2 constraint. Many different procedures are possible (some may use gradient descent, while others may use analytical solutions for where the gradient is zero), but in all procedures $\alpha$ must increase whenever $\Omega(\theta) > k$ and decrease whenever $\Omega(\theta) < k$. All positive $\alpha$ encourage $\Omega(\theta)$ to shrink. The optimal value $\alpha^*$ will encourage $\Omega(\theta)$ to shrink, but not so strongly as to make $\Omega(\theta)$ become less than $k$.

To gain some insight into the effect of the constraint, we can fix $\alpha^*$ and view the problem as just a function of $\theta$:

$\theta^* = \arg\min_\theta \mathcal{L}(\theta, \alpha^*) = \arg\min_\theta J(\theta; X, y) + \alpha^*\Omega(\theta).$   (7.28)
This is exactly the same as the regularized training problem of minimizing $\tilde{J}$. We can thus think of a parameter norm penalty as imposing a constraint on the weights. If $\Omega$ is the L2 norm, then the weights are constrained to lie in an L2 ball. If $\Omega$ is the L1 norm, then the weights are constrained to lie in a region of limited L1 norm.
limited L1 norm. Usually we do not know the size of the constraint region that we impose by using weight decay with coefficient α* because the value of α* does not directly tell us the value of k. In principle, one can solve for k, but the relationship between k and α* depends on the form of J. While we do not know the exact size of the constraint region, we can control it roughly by increasing or decreasing α in order to grow or shrink the constraint region. Larger α will result in a smaller constraint region. Smaller α will result in a larger constraint region.

Sometimes we may wish to use explicit constraints rather than penalties. As described in section 4.4, we can modify algorithms such as stochastic gradient descent to take a step downhill on J(θ) and then project θ back to the nearest point that satisfies Ω(θ) < k. This can be useful if we have an idea of what value of k is appropriate and do not want to spend time searching for the value of α that corresponds to this k.
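The project-then-step procedure described above can be sketched as follows. This is a minimal illustration, not code from the book: the L2-ball constraint, toy quadratic objective, learning rate, and function names are all my own choices.

```python
import numpy as np

def project_l2_ball(theta, k):
    """Project theta back to the nearest point satisfying ||theta||_2 <= k."""
    norm = np.linalg.norm(theta)
    return theta if norm <= k else theta * (k / norm)

# Toy objective J(theta) = ||theta - target||^2 whose unconstrained minimum
# (the target itself, with norm 5) lies outside the constraint region.
target = np.array([3.0, 4.0])
grad = lambda theta: 2.0 * (theta - target)

k, lr = 1.0, 0.1
theta = np.zeros(2)
for _ in range(200):
    theta = theta - lr * grad(theta)   # take a step downhill on J
    theta = project_l2_ball(theta, k)  # reproject onto the constraint set

print(np.linalg.norm(theta))  # stays at the boundary: 1.0 (up to rounding)
```

The iterate converges to the point on the boundary of the constraint region closest to the unconstrained minimum, without ever needing to choose a penalty coefficient α.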
Another reason to use explicit constraints and reprojection rather than enforcing constraints with penalties is that penalties can cause non-convex optimization procedures to get stuck in local minima corresponding to small θ. When training neural networks, this usually manifests as neural networks that train with several "dead units." These are units that do not contribute much to the behavior of the function learned by the network because the weights going into or out of them are all very small. When training with a penalty on the norm of the weights, these configurations can be locally optimal, even if it is possible to significantly reduce J by making the weights larger. Explicit constraints implemented by reprojection can work much better in these cases because they do not encourage the weights to approach the origin. Explicit constraints implemented by reprojection only have an effect when the weights become large and attempt to leave the constraint region.
Finally, explicit constraints with reprojection can be useful because they impose some stability on the optimization procedure. When using high learning rates, it is possible to encounter a positive feedback loop in which large weights induce large gradients which then induce a large update to the weights. If these updates consistently increase the size of the weights, then θ rapidly moves away from the origin until numerical overflow occurs. Explicit constraints with reprojection prevent this feedback loop from continuing to increase the magnitude of the weights without bound. Hinton et al. (2012c) recommend using constraints combined with a high learning rate to allow rapid exploration of parameter space while maintaining some stability. In particular, Hinton et al. (2012c) recommend a strategy introduced by Srebro and Shraibman (2005): constraining the norm of each column of the weight matrix
of a neural net layer, rather than constraining the Frobenius norm of the entire weight matrix. Constraining the norm of each column separately prevents any one hidden unit from having very large weights. If we converted this constraint into a penalty in a Lagrange function, it would be similar to L2 weight decay but with a separate KKT multiplier for the weights of each hidden unit. Each of these KKT multipliers would be dynamically updated separately to make each hidden unit obey the constraint. In practice, column norm limitation is always implemented as an explicit constraint with reprojection.

7.3 Regularization and Under-Constrained Problems

In some cases, regularization is necessary for machine learning problems to be properly defined. Many linear models in machine learning, including linear regression and PCA, depend on inverting the matrix X⊤X. This is not possible whenever X⊤X is singular. This matrix can be singular whenever the data generating distribution truly has no variance in some direction, or when no variance is observed in some direction because there are fewer examples (rows of X) than input features (columns of X). In this case, many forms of
regularization correspond to inverting X⊤X + αI instead. This regularized matrix is guaranteed to be invertible.

These linear problems have closed form solutions when the relevant matrix is invertible. It is also possible for a problem with no closed form solution to be underdetermined. An example is logistic regression applied to a problem where the classes are linearly separable. If a weight vector w is able to achieve perfect classification, then 2w will also achieve perfect classification and higher likelihood. An iterative optimization procedure like stochastic gradient descent will continually increase the magnitude of w and, in theory, will never halt. In practice, a numerical implementation of gradient descent will eventually reach sufficiently large weights to cause numerical overflow, at which point its behavior will depend on how the programmer has decided to handle values that are not real numbers.

Most forms of regularization are able to guarantee the convergence of iterative methods applied to underdetermined problems.
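The regularized inversion described above is easy to check numerically. A small sketch (the random data and helper name are my own): with fewer examples than features, X⊤X is singular, but X⊤X + αI is invertible for any α > 0, and as α shrinks the solution approaches the Moore-Penrose pseudoinverse solution.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))  # 3 examples, 5 features: X^T X must be singular
y = rng.normal(size=3)

# X^T X is 5x5 but has rank at most 3, so it cannot be inverted directly.
print(np.linalg.matrix_rank(X.T @ X))  # 3

# Adding alpha * I makes the matrix invertible for any alpha > 0.
def ridge(alpha):
    return np.linalg.solve(X.T @ X + alpha * np.eye(5), X.T @ y)

# As alpha shrinks, the regularized solution approaches the
# Moore-Penrose pseudoinverse solution.
w_ridge = ridge(1e-8)
w_pinv = np.linalg.pinv(X) @ y
print(np.max(np.abs(w_ridge - w_pinv)))  # small
```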
For example, weight decay will cause gradient descent to quit increasing the magnitude of the weights when the slope of the likelihood is equal to the weight decay coefficient.

The idea of using regularization to solve underdetermined problems extends beyond machine learning. The same idea is useful for several basic linear algebra problems. As we saw in section 2.9, we can solve underdetermined linear equations using
the Moore-Penrose pseudoinverse. Recall that one definition of the pseudoinverse X⁺ of a matrix X is

X^+ = \lim_{\alpha \searrow 0} (X^\top X + \alpha I)^{-1} X^\top.   (7.29)

We can now recognize equation 7.29 as performing linear regression with weight decay. Specifically, equation 7.29 is the limit of equation 7.17 as the regularization coefficient shrinks to zero. We can thus interpret the pseudoinverse as stabilizing underdetermined problems using regularization.

7.4 Dataset Augmentation

The best way to make a machine learning model generalize better is to train it on more data. Of course, in practice, the amount of data we have is limited. One way to get around this problem is to create fake data and add it to the training set. For some machine learning tasks, it is reasonably straightforward to create new fake data.

This approach is easiest for classification.
A classifier needs to take a complicated, high-dimensional input x and summarize it with a single category identity y. This means that the main task facing a classifier is to be invariant to a wide variety of transformations. We can generate new (x, y) pairs easily just by transforming the x inputs in our training set.

This approach is not as readily applicable to many other tasks. For example, it is difficult to generate new fake data for a density estimation task unless we have already solved the density estimation problem.

Dataset augmentation has been a particularly effective technique for a specific classification problem: object recognition. Images are high dimensional and include an enormous variety of factors of variation, many of which can be easily simulated. Operations like translating the training images a few pixels in each direction can often greatly improve generalization, even if the model has already been designed to be partially translation invariant by using the convolution and pooling techniques described in chapter 9. Many other operations such as rotating the image or scaling the image have also proven quite effective.

One must be careful not to apply transformations that would change the correct class. For example, optical character recognition tasks
require recognizing the difference between 'b' and 'd' and the difference between '6' and '9', so horizontal flips and 180° rotations are not appropriate ways of augmenting datasets for these tasks.
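The pixel-translation scheme described above can be sketched as follows. This is a minimal NumPy illustration; the shift amounts, zero-padding choice, and function names are mine, not from the book.

```python
import numpy as np

def translate(image, dx, dy):
    """Shift a 2-D image by (dx, dy) pixels, zero-padding the exposed border."""
    out = np.zeros_like(image)
    h, w = image.shape
    src = image[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    out[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)] = src
    return out

def augment(images, labels, shifts=(-2, -1, 1, 2)):
    """Create shifted copies of each image; the class label is unchanged."""
    new_x, new_y = [], []
    for img, lab in zip(images, labels):
        for s in shifts:
            new_x.append(translate(img, s, 0))  # horizontal shift
            new_x.append(translate(img, 0, s))  # vertical shift
            new_y.extend([lab, lab])
    return np.array(new_x), np.array(new_y)

images = np.arange(16.0).reshape(1, 4, 4)
x_aug, y_aug = augment(images, np.array([7]))
print(x_aug.shape)  # (8, 4, 4): 4 shift amounts x 2 axes per original image
```

Each synthetic example keeps the original label, which is exactly why this trick is safe for translation but not for class-changing transformations like flipping a 'b' into a 'd'.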
There are also transformations that we would like our classifiers to be invariant to, but which are not easy to perform. For example, out-of-plane rotation cannot be implemented as a simple geometric operation on the input pixels. Dataset augmentation is effective for speech recognition tasks as well (Jaitly and Hinton, 2013).

Injecting noise in the input to a neural network (Sietsma and Dow, 1991) can also be seen as a form of data augmentation. For many classification and even some regression tasks, the task should still be possible to solve even if small random noise is added to the input. Neural networks prove not to be very robust to noise, however (Tang and Eliasmith, 2010). One way to improve the robustness of neural networks is simply to train them with random noise applied to their inputs. Input noise injection is part of some unsupervised learning algorithms such as the denoising autoencoder (Vincent et al., 2008). Noise injection also works when the noise is applied to the hidden units, which can be seen as doing dataset augmentation at multiple levels of abstraction.
Poole et al. (2014) recently showed that this approach can be highly effective provided that the magnitude of the noise is carefully tuned. Dropout, a powerful regularization strategy that will be described in section 7.12, can be seen as a process of constructing new inputs by multiplying by noise.

When comparing machine learning benchmark results, it is important to take the effect of dataset augmentation into account. Often, hand-designed dataset augmentation schemes can dramatically reduce the generalization error of a machine learning technique. To compare the performance of one machine learning algorithm to another, it is necessary to perform controlled experiments. When comparing machine learning algorithm A and machine learning algorithm B, it is necessary to make sure that both algorithms were evaluated using the same hand-designed dataset augmentation schemes.
Suppose that algorithm A performs poorly with no dataset augmentation and algorithm B performs well when combined with numerous synthetic transformations of the input. In such a case it is likely the synthetic transformations caused the improved performance, rather than the use of machine learning algorithm B. Sometimes deciding whether an experiment has been properly controlled requires subjective judgment. For example, machine learning algorithms that inject noise into the input are performing a form of dataset augmentation. Usually, operations that are generally applicable (such as adding Gaussian noise to the input) are considered part of the machine learning algorithm, while operations that are specific to one application domain (such as randomly cropping an image) are considered to be separate pre-processing steps.
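Adding Gaussian noise to the input, as mentioned above, can be sketched as a minibatch pipeline (illustrative only; the noise scale and function names are my own choices, not from the book):

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_batches(x, y, batch_size, sigma=0.1, epochs=2):
    """Yield minibatches with fresh Gaussian noise added to the inputs.

    Because the noise is resampled every time an example is visited, the
    model effectively sees a new perturbed copy of the dataset each epoch.
    """
    n = x.shape[0]
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            noise = sigma * rng.normal(size=x[idx].shape)
            yield x[idx] + noise, y[idx]

x = np.ones((6, 3))
y = np.arange(6)
batches = list(noisy_batches(x, y, batch_size=2))
print(len(batches))  # 2 epochs x 3 batches = 6
```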
7.5 Noise Robustness

Section 7.4 has motivated the use of noise applied to the inputs as a dataset augmentation strategy. For some models, the addition of noise with infinitesimal variance at the input of the model is equivalent to imposing a penalty on the norm of the weights (Bishop, 1995a,b). In the general case, it is important to remember that noise injection can be much more powerful than simply shrinking the parameters, especially when the noise is added to the hidden units. Noise applied to the hidden units is such an important topic that it merits its own separate discussion; the dropout algorithm described in section 7.12 is the main development of that approach.

Another way that noise has been used in the service of regularizing models is by adding it to the weights. This technique has been used primarily in the context of recurrent neural networks (Jim et al., 1996; Graves, 2011).
This can be interpreted as a stochastic implementation of Bayesian inference over the weights. The Bayesian treatment of learning would consider the model weights to be uncertain and representable via a probability distribution that reflects this uncertainty. Adding noise to the weights is a practical, stochastic way to reflect this uncertainty.

Noise applied to the weights can also be interpreted as equivalent (under some assumptions) to a more traditional form of regularization, encouraging stability of the function to be learned. Consider the regression setting, where we wish to train a function ŷ(x) that maps a set of features x to a scalar using the least-squares cost function between the model predictions ŷ(x) and the true values y:

J = \mathbb{E}_{p(x,y)}\left[ (\hat{y}(x) - y)^2 \right].   (7.30)

The training set consists of m labeled examples {(x^{(1)}, y^{(1)}), ..., (x^{(m)}, y^{(m)})}.

We now assume that with each input presentation we also include a random perturbation ε_W ∼ N(ε; 0, ηI) of the network weights. Let us imagine that we have a standard l-layer MLP.
We denote the perturbed model as ŷ_{ε_W}(x). Despite the injection of noise, we are still interested in minimizing the squared error of the output of the network. The objective function thus becomes:

\tilde{J}_W = \mathbb{E}_{p(x,y,\epsilon_W)}\left[ (\hat{y}_{\epsilon_W}(x) - y)^2 \right]   (7.31)
            = \mathbb{E}_{p(x,y,\epsilon_W)}\left[ \hat{y}_{\epsilon_W}(x)^2 - 2 y \hat{y}_{\epsilon_W}(x) + y^2 \right].   (7.32)

For small η, the minimization of J with added weight noise (with covariance ηI) is equivalent to minimization of J with an additional regularization term: η E_{p(x,y)}[ ‖∇_W ŷ(x)‖² ].
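For a linear model ŷ(x) = w⊤x this equivalence is exact, since ∇_w ŷ(x) = x, and a small Monte Carlo check makes it concrete (the toy values below are my own, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)   # model weights
x = rng.normal(size=4)   # one input
y = 1.5                  # its target
eta = 0.01               # weight-noise variance

# Monte Carlo estimate of E_eps[(yhat_eps(x) - y)^2], eps ~ N(0, eta*I).
eps = rng.normal(scale=np.sqrt(eta), size=(200_000, 4))
noisy_loss = np.mean(((w + eps) @ x - y) ** 2)

# Analytic prediction: plain squared error plus eta * ||grad_w yhat(x)||^2.
# For the linear model yhat(x) = w^T x, the gradient grad_w yhat(x) is x.
predicted = (w @ x - y) ** 2 + eta * (x @ x)

print(abs(noisy_loss - predicted))  # small Monte Carlo error
```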
This form of regularization encourages the parameters to go to regions of parameter space where small perturbations of the weights have a relatively small influence on the output. In other words, it pushes the model into regions where the model is relatively insensitive to small variations in the weights, finding points that are not merely minima, but minima surrounded by flat regions (Hochreiter and Schmidhuber, 1995). In the simplified case of linear regression (where, for instance, ŷ(x) = w⊤x + b), this regularization term collapses into η E_{p(x)}[ ‖x‖² ], which is not a function of parameters and therefore does not contribute to the gradient of \tilde{J}_W with respect to the model parameters.

7.5.1 Injecting Noise at the Output Targets

Most datasets have some amount of mistakes in the y labels. It can be harmful to maximize log p(y | x) when y is a mistake. One way to prevent this is to explicitly model the noise on the labels.
For example, we can assume that for some small constant ε, the training set label y is correct with probability 1 − ε, and otherwise any of the other possible labels might be correct. This assumption is easy to incorporate into the cost function analytically, rather than by explicitly drawing noise samples. For example, label smoothing regularizes a model based on a softmax with k output values by replacing the hard 0 and 1 classification targets with targets of ε/(k−1) and 1−ε, respectively. The standard cross-entropy loss may then be used with these soft targets. Maximum likelihood learning with a softmax classifier and hard targets may actually never converge: the softmax can never predict a probability of exactly 0 or exactly 1, so it will continue to learn larger and larger weights, making more extreme predictions forever. It is possible to prevent this scenario using other regularization strategies like weight decay. Label smoothing has the advantage of preventing the pursuit of hard probabilities without discouraging correct classification. This strategy has been used since the 1980s and continues to be featured prominently in modern neural networks (Szegedy et al., 2015).
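The soft targets described above are straightforward to construct. A minimal sketch (function and variable names are my own):

```python
import numpy as np

def smooth_targets(labels, k, eps=0.1):
    """Replace hard 0/1 targets with eps/(k-1) and 1-eps (label smoothing)."""
    targets = np.full((len(labels), k), eps / (k - 1))
    targets[np.arange(len(labels)), labels] = 1.0 - eps
    return targets

def cross_entropy(targets, probs):
    """Standard cross-entropy loss; works unchanged with soft targets."""
    return -np.mean(np.sum(targets * np.log(probs), axis=1))

t = smooth_targets(np.array([2]), k=4, eps=0.1)
print(t)  # one row: eps/(k-1) everywhere except 1-eps = 0.9 at index 2
```

Each row still sums to one, so the targets remain a valid probability distribution, but the softmax can now actually reach them with finite weights.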
7.6 Semi-Supervised Learning

In the paradigm of semi-supervised learning, both unlabeled examples from p(x) and labeled examples from p(x, y) are used to estimate p(y | x) or predict y from x.

In the context of deep learning, semi-supervised learning usually refers to learning a representation h = f(x). The goal is to learn a representation so
that examples from the same class have similar representations. Unsupervised learning can provide useful cues for how to group examples in representation space. Examples that cluster tightly in the input space should be mapped to similar representations. A linear classifier in the new space may achieve better generalization in many cases (Belkin and Niyogi, 2002; Chapelle et al., 2003). A long-standing variant of this approach is the application of principal components analysis as a pre-processing step before applying a classifier (on the projected data).

Instead of having separate unsupervised and supervised components in the model, one can construct models in which a generative model of either p(x) or p(x, y) shares parameters with a discriminative model of p(y | x). One can then trade off the supervised criterion −log p(y | x) with the unsupervised or generative one (such as −log p(x) or −log p(x, y)). The generative criterion then expresses a particular form of prior belief about the solution to the supervised learning problem (Lasserre et al., 2006), namely that the structure of
p(x) is connected to the structure of p(y | x) in a way that is captured by the shared parametrization. By controlling how much of the generative criterion is included in the total criterion, one can find a better trade-off than with a purely generative or a purely discriminative training criterion (Lasserre et al., 2006; Larochelle and Bengio, 2008).

Salakhutdinov and Hinton (2008) describe a method for learning the kernel function of a kernel machine used for regression, in which the usage of unlabeled examples for modeling p(x) improves p(y | x) quite significantly.

See Chapelle et al. (2006) for more information about semi-supervised learning.

7.7 Multi-Task Learning

Multi-task learning (Caruana, 1993) is a way to improve generalization by pooling the examples (which can be seen as soft
constraints imposed on the parameters) arising out of several tasks. In the same way that additional training examples put more pressure on the parameters of the model towards values that generalize well, when part of a model is shared across tasks, that part of the model is more constrained towards good values (assuming the sharing is justified), often yielding better generalization.

Figure 7.2 illustrates a very common form of multi-task learning, in which different supervised tasks (predicting y^{(i)} given x) share the same input x, as well as some intermediate-level representation h^{(shared)} capturing a common pool of
factors. The model can generally be divided into two kinds of parts and associated parameters:

1. Task-specific parameters (which only benefit from the examples of their task to achieve good generalization). These are the upper layers of the neural network in figure 7.2.

2. Generic parameters, shared across all the tasks (which benefit from the pooled data of all the tasks). These are the lower layers of the neural network in figure 7.2.

[Figure 7.2 shows a network with input x, a shared representation h^{(shared)}, and top-level hidden units h^{(1)}, h^{(2)}, h^{(3)} feeding outputs y^{(1)} and y^{(2)}.]

Figure 7.2: Multi-task learning can be cast in several ways in deep learning frameworks and this figure illustrates the common situation where the tasks share a common input but involve different target random variables. The lower layers of a deep network (whether it is supervised and feedforward or includes a generative component with downward arrows) can be shared across such tasks, while task-specific parameters (associated respectively with the weights into and
from h^{(1)} and h^{(2)}) can be learned on top of those yielding a shared representation h^{(shared)}. The underlying assumption is that there exists a common pool of factors that explain the variations in the input x, while each task is associated with a subset of these factors. In this example, it is additionally assumed that top-level hidden units h^{(1)} and h^{(2)} are specialized to each task (respectively predicting y^{(1)} and y^{(2)}) while some intermediate-level representation h^{(shared)} is shared across all tasks. In the unsupervised learning context, it makes sense for some of the top-level factors to be associated with none of the output tasks (h^{(3)}): these are the factors that explain some of the input variations but are not relevant for predicting y^{(1)} or y^{(2)}.

Improved generalization and generalization error bounds (Baxter, 1995) can be achieved because of the shared parameters, for which statistical
strength can be
greatly improved (in proportion with the increased number of examples for the shared parameters, compared to the scenario of single-task models). Of course this will happen only if some assumptions about the statistical relationship between the different tasks are valid, meaning that there is something shared across some of the tasks.

From the point of view of deep learning, the underlying prior belief is the following: among the factors that explain the variations observed in the data associated with the different tasks, some are shared across two or more tasks.

Figure 7.3: Learning curves showing how the negative log-likelihood loss changes over time (indicated as number of training iterations over the dataset, or epochs). In this example, we train a maxout network on MNIST. Observe that the training objective decreases consistently over time, but the validation set average loss eventually begins to increase again, forming an asymmetric U-shaped curve. (The plot shows the training set loss and validation set loss against time in epochs.)

7.8 Early Stopping

When training large models with sufficient representational capacity to overfit the task, we often observe that
training error decreases steadily over time, but validation set error begins to rise again. See figure 7.3 for an example of this behavior. This behavior occurs very reliably.

This means we can obtain a model with better validation set error (and thus, hopefully better test set error) by returning to the parameter setting at the point in time with the lowest validation set error. Every time the error on the validation set improves, we store a copy of the model parameters. When the training algorithm terminates, we return these parameters, rather than the latest parameters. The
The algorithm terminates when no parameters have improved over the best recorded validation error for some pre-specified number of iterations. This procedure is specified more formally in algorithm 7.1.

Algorithm 7.1: The early stopping meta-algorithm for determining the best amount of time to train. This meta-algorithm is a general strategy that works well with a variety of training algorithms and ways of quantifying error on the validation set.

  Let n be the number of steps between evaluations.
  Let p be the "patience," the number of times to observe worsening validation set error before giving up.
  Let θ_0 be the initial parameters.
  θ ← θ_0
  i ← 0
  j ← 0
  v ← ∞
  θ* ← θ
  i* ← i
  while j < p do
    Update θ by running the training algorithm for n steps.
    i ← i + n
    v′ ← ValidationSetError(θ)
    if v′ < v then
      j ← 0
      θ* ← θ
      i* ← i
      v ← v′
    else
      j ← j + 1
    end if
  end while
  Best parameters are θ*, best number of training steps is i*.
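The meta-algorithm above can be sketched in ordinary code. The following is a minimal illustration, not a reference implementation from the book; `train_n_steps` and `validation_error` are hypothetical callables standing in for a real training loop and evaluation routine.

```python
import copy
import math

def early_stopping(theta, train_n_steps, validation_error, n=1, patience=3):
    """Algorithm 7.1 (sketch): train until the validation error fails to
    improve for `patience` consecutive evaluations, then return the best
    parameters seen and the step count at which they were observed."""
    i = 0                 # steps taken so far
    j = 0                 # evaluations since the last improvement
    v = math.inf          # best validation error seen so far
    best_theta, best_i = copy.deepcopy(theta), i
    while j < patience:
        theta = train_n_steps(theta, n)    # run the training algorithm for n steps
        i += n
        v_new = validation_error(theta)
        if v_new < v:                      # improvement: reset patience, save a copy
            j = 0
            best_theta, best_i = copy.deepcopy(theta), i
            v = v_new
        else:                              # no improvement: consume one unit of patience
            j += 1
    return best_theta, best_i
```

Here the parameter copy is stored with `copy.deepcopy`; in practice the best parameters can live in slower memory (host RAM or disk) since they are written rarely and never read during training.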
This strategy is known as early stopping. It is probably the most commonly used form of regularization in deep learning. Its popularity is due both to its effectiveness and its simplicity.

One way to think of early stopping is as a very efficient hyperparameter selection algorithm. In this view, the number of training steps is just another hyperparameter. We can see in figure 7.3 that this hyperparameter has a U-shaped validation set
performance curve. Most hyperparameters that control model capacity have such a U-shaped validation set performance curve, as illustrated in figure 5.3. In the case of early stopping, we are controlling the effective capacity of the model by determining how many steps it can take to fit the training set. Most hyperparameters must be chosen using an expensive guess and check process, where we set a hyperparameter at the start of training, then run training for several steps to see its effect. The "training time" hyperparameter is unique in that by definition a single run of training tries out many values of the hyperparameter. The only significant cost to choosing this hyperparameter automatically via early stopping is running the validation set evaluation periodically during training. Ideally, this is done in parallel to the training process on a separate machine, separate CPU, or separate GPU from the main training process. If such resources are not available, then the cost of these periodic evaluations may be reduced by using a validation set that is small compared to the training set or by evaluating the validation set error less frequently and obtaining a lower resolution estimate of the optimal training time. An additional cost
to early stopping is the need to maintain a copy of the best parameters. This cost is generally negligible, because it is acceptable to store these parameters in a slower and larger form of memory (for example, training in GPU memory, but storing the optimal parameters in host memory or on a disk drive). Since the best parameters are written to infrequently and never read during training, these occasional slow writes have little effect on the total training time.

Early stopping is a very unobtrusive form of regularization, in that it requires almost no change in the underlying training procedure, the objective function, or the set of allowable parameter values. This means that it is easy to use early stopping without damaging the learning dynamics. This is in contrast to weight decay, where one must be careful not to use too much weight decay and trap the network in a bad local minimum corresponding to a solution with pathologically small weights.

Early stopping may be used either alone or in conjunction with other regulariza-
tion strategies. Even when using regularization strategies that modify the objective function to encourage better generalization, it is rare for the best generalization to occur at a local minimum of the training objective.

Early stopping requires a validation set, which means some training data is not fed to the model. To best exploit this extra data, one can perform extra training after the initial training with early stopping has completed. In the second, extra training step, all of the training data is included. There are two basic strategies one can use for this second training procedure.

One strategy (algorithm 7.2) is to initialize the model again and retrain on all
of the data. In this second training pass, we train for the same number of steps as the early stopping procedure determined was optimal in the first pass. There are some subtleties associated with this procedure. For example, there is not a good way of knowing whether to retrain for the same number of parameter updates or the same number of passes through the dataset. On the second round of training, each pass through the dataset will require more parameter updates because the training set is bigger.

Algorithm 7.2: A meta-algorithm for using early stopping to determine how long to train, then retraining on all the data.

  Let X^(train) and y^(train) be the training set.
  Split X^(train) and y^(train) into (X^(subtrain), X^(valid)) and (y^(subtrain), y^(valid)) respectively.
  Run early stopping (algorithm 7.1) starting from random θ using X^(subtrain) and y^(subtrain) for training data and X^(valid) and y^(valid) for validation data. This returns i*, the optimal number of steps.
  Set θ to random values again.
  Train on X^(train) and
y^(train) for i* steps.

Another strategy for using all of the data is to keep the parameters obtained from the first round of training and then continue training, but now using all of the data. At this stage, we no longer have a guide for when to stop in terms of a number of steps. Instead, we can monitor the average loss function on the validation set, and continue training until it falls below the value of the training set objective at which the early stopping procedure halted. This strategy avoids the high cost of retraining the model from scratch, but is not as well-behaved. For example, there is no guarantee that the objective on the validation set will ever reach the target value, so this strategy is not even guaranteed to terminate.
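The first strategy can be sketched as follows. This is an illustrative sketch under assumed interfaces: `init_params` and `train_step` are hypothetical stand-ins for a real model, and the split helper is only one way of carving out a validation set.

```python
import numpy as np

def split_train(X, y, valid_fraction=0.2, seed=0):
    """Split a training set into (subtrain, valid) pieces, as in algorithm 7.2."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_valid = int(len(X) * valid_fraction)
    valid, subtrain = idx[:n_valid], idx[n_valid:]
    return (X[subtrain], y[subtrain]), (X[valid], y[valid])

def retrain_on_all_data(X, y, init_params, train_step, best_steps):
    """After early stopping on the subtrain/valid split has returned i* =
    `best_steps`, reinitialize theta and train on the full set for i* steps."""
    theta = init_params()          # set theta to random values again
    for _ in range(best_steps):
        theta = train_step(theta, X, y)
    return theta
```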
This procedure is presented more formally in algorithm 7.3.

Early stopping is also useful because it reduces the computational cost of the training procedure. Besides the obvious reduction in cost due to limiting the number of training iterations, it also has the benefit of providing regularization without requiring the addition of penalty terms to the cost function or the computation of the gradients of such additional terms.

How early stopping acts as a regularizer: So far we have stated that early stopping is a regularization strategy, but we have supported this claim only by showing learning curves where the validation set error has a U-shaped curve. What
Algorithm 7.3: Meta-algorithm using early stopping to determine at what objective value we start to overfit, then continue training until that value is reached.

  Let X^(train) and y^(train) be the training set.
  Split X^(train) and y^(train) into (X^(subtrain), X^(valid)) and (y^(subtrain), y^(valid)) respectively.
  Run early stopping (algorithm 7.1) starting from random θ using X^(subtrain) and y^(subtrain) for training data and X^(valid) and y^(valid) for validation data. This updates θ.
  ε ← J(θ, X^(subtrain), y^(subtrain))
  while J(θ, X^(valid), y^(valid)) > ε do
    Train on X^(train) and y^(train) for n steps.
  end while

is the actual mechanism by which early stopping regularizes the model? Bishop (1995a) and Sjöberg and Ljung (1995) argued that early stopping has the effect of restricting the optimization procedure to a relatively small volume of parameter space in the neighborhood of the initial parameter value θ_0, as
illustrated in figure 7.4. More specifically, imagine taking τ optimization steps (corresponding to τ training iterations) with learning rate ε. We can view the product ετ as a measure of effective capacity. Assuming the gradient is bounded, restricting both the number of iterations and the learning rate limits the volume of parameter space reachable from θ_0. In this sense, ετ behaves as if it were the reciprocal of the coefficient used for weight decay.

Indeed, we can show that, in the case of a simple linear model with a quadratic error function and simple gradient descent, early stopping is equivalent to L2 regularization.

In order to compare with classical L2 regularization, we examine a simple setting where the only parameters are linear weights (θ = w). We can model the cost function J with a quadratic approximation in the neighborhood of the empirically optimal value of the weights w*:
    Ĵ(θ) = J(w*) + (1/2)(w − w*)ᵀ H (w − w*),    (7.33)

where H is the Hessian matrix of J with respect to w evaluated at w*. Given the assumption that w* is a minimum of J(w), we know that H is positive semidefinite. Under a local Taylor series approximation, the gradient is given by:

    ∇_w Ĵ(w) = H (w − w*).    (7.34)
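As a quick numerical sanity check on the quadratic approximation (an illustrative sketch with made-up values for H and w*, not from the book), the analytic gradient H(w − w*) of equation 7.34 should agree with a finite-difference estimate of the gradient of equation 7.33:

```python
import numpy as np

# A small PSD Hessian and an arbitrary optimum w* (illustrative values).
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])
w_star = np.array([1.0, -2.0])

def J_hat(w):
    # Quadratic approximation of the cost around w* (equation 7.33,
    # with the constant J(w*) taken to be 0 for simplicity).
    d = w - w_star
    return 0.5 * d @ H @ d

def grad_J_hat(w):
    # Analytic gradient from equation 7.34.
    return H @ (w - w_star)

def numerical_grad(f, w, eps=1e-6):
    # Central finite differences, one coordinate at a time.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

w = np.array([0.3, 0.7])
analytic = grad_J_hat(w)
numeric = numerical_grad(J_hat, w)
```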
Figure 7.4: An illustration of the effect of early stopping. (Left) The solid contour lines indicate the contours of the negative log-likelihood. The dashed line indicates the trajectory taken by SGD beginning from the origin. Rather than stopping at the point w* that minimizes the cost, early stopping results in the trajectory stopping at an earlier point w̃. (Right) An illustration of the effect of L2 regularization for comparison. The dashed circles indicate the contours of the L2 penalty, which causes the minimum of the total cost to lie nearer the origin than the minimum of the unregularized cost. [Both panels plot w1 against w2, marking w* and w̃.]

We are going to study the trajectory followed by the parameter vector during training. For simplicity, let us set the initial parameter vector to the origin,³ that is, w^(0) = 0. Let us study the approximate behavior of gradient descent on J by analyzing gradient descent on Ĵ:
    w^(τ) = w^(τ−1) − ε ∇_w Ĵ(w^(τ−1))    (7.35)
         = w^(τ−1) − ε H (w^(τ−1) − w*)    (7.36)
    w^(τ) − w* = (I − εH)(w^(τ−1) − w*).    (7.37)

Let us now rewrite this expression in the space of the eigenvectors of H, exploiting the eigendecomposition of H: H = QΛQᵀ, where Λ is a diagonal matrix and Q is an orthonormal basis of eigenvectors.

    w^(τ) − w* = (I − εQΛQᵀ)(w^(τ−1) − w*)    (7.38)
    Qᵀ(w^(τ) − w*) = (I − εΛ)Qᵀ(w^(τ−1) − w*)    (7.39)

³ For neural networks, to obtain symmetry breaking between hidden units, we cannot initialize all the parameters to 0, as discussed in section 6.2. However, the argument holds for any other initial value w^(0).
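The recursion in equation 7.37 can be checked numerically. The sketch below (with assumed illustrative values for H, w*, ε, and τ, not from the book) iterates equation 7.36 and confirms that, in the eigenbasis of H, each coordinate of w^(τ) − w* contracts by the factor (1 − ελᵢ) per step, as equation 7.39 predicts:

```python
import numpy as np

# Illustrative quadratic problem (values are assumptions for this sketch).
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])
w_star = np.array([1.0, -2.0])
eps = 0.1          # learning rate, small enough that |1 - eps*lam| < 1
tau = 25           # number of gradient steps

# Run gradient descent on the quadratic, starting from the origin (eq. 7.35-7.36).
w = np.zeros(2)
for _ in range(tau):
    w = w - eps * H @ (w - w_star)

# Eigendecomposition H = Q Lam Q^T (eq. 7.38-7.39): in the eigenbasis each
# coordinate of w - w* contracts independently by (1 - eps*lam_i) per step, so
# after tau steps Q^T (w - w*) = (I - eps*Lam)^tau Q^T (w^(0) - w*).
lam, Q = np.linalg.eigh(H)
predicted = Q.T @ (np.zeros(2) - w_star) * (1 - eps * lam) ** tau
actual = Q.T @ (w - w_star)
```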
Assuming that w^(0) = 0 and that ε is chosen to be small enough to guarantee |1 − ελᵢ| < 1, the parameter trajectory during training after τ parameter updates is as follows:

    Qᵀw^(τ) = [I − (I − εΛ)^τ] Qᵀw*.    (7.40)

Now, the expression for Qᵀw̃ in equation 7.13 for L2 regularization can be rearranged as:

    Qᵀw̃ = (Λ + αI)⁻¹ Λ Qᵀw*    (7.41)
    Qᵀw̃ = [I − (Λ + αI)⁻¹ α] Qᵀw*.    (7.42)

Comparing equation 7.40 and equation 7.42, we see that if the hyperparameters ε, α, and τ are chosen such that

    (I − εΛ)^τ = (Λ + αI)⁻¹ α,    (7.43)

then L2 regularization and early stopping can be seen to be equivalent (at least under the quadratic approximation of the objective function). Going even further, by taking logarithms and using the series expansion for log(1 + x), we can conclude that if all λᵢ are small (that is,
ελᵢ ≪ 1 and λᵢ/α ≪ 1), then

    τ ≈ 1/(εα),    (7.44)
    α ≈ 1/(τε).    (7.45)

That is, under these assumptions, the number of training iterations τ plays a role inversely proportional to the L2 regularization parameter, and the inverse of τε plays the role of the weight decay coefficient.

Parameter values corresponding to directions of significant curvature (of the objective function) are regularized less than directions of less curvature. Of course, in the context of early stopping, this really means that parameters that correspond to directions of significant curvature tend to learn early relative to parameters corresponding to directions of less curvature.

The derivations in this section have shown that a trajectory of length τ ends at a point that corresponds to a minimum of the L2-regularized objective. Early stopping is of course more than the mere restriction of the trajectory length; instead, early stopping typically involves monitoring the validation set error in order to stop the trajectory at a particularly
good point in space. Early stopping therefore has the advantage over weight decay that early stopping automatically determines the correct amount of regularization, while weight decay requires many training experiments with different values of its hyperparameter.
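The approximate correspondence in equations 7.44 and 7.45 can be demonstrated in a toy setting. The following sketch uses a diagonal Hessian and made-up values (an assumption for illustration, not from the book), chosen so that ελᵢ ≪ 1 and λᵢ/α ≪ 1, and compares the early stopped iterate with the L2-regularized solution at α = 1/(τε):

```python
import numpy as np

# Quadratic cost J(w) = 0.5 (w - w*)^T H (w - w*), diagonal H for clarity.
lam = np.array([0.05, 0.02])        # eigenvalues of H (H is diagonal here)
w_star = np.array([3.0, -1.5])
eps = 0.01                          # learning rate
tau = 100                           # number of early stopping iterations

# Early stopped solution: run tau steps of gradient descent from the origin.
w = np.zeros(2)
for _ in range(tau):
    w = w - eps * lam * (w - w_star)

# L2-regularized solution with alpha = 1/(tau*eps) (equation 7.45):
# w_tilde = (H + alpha I)^{-1} H w*, computed componentwise since H is diagonal.
alpha = 1.0 / (tau * eps)
w_tilde = lam / (lam + alpha) * w_star
```

The two solutions agree to within a few percent in this regime; the agreement tightens as ελᵢτ shrinks.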
7.9 Parameter Tying and Parameter Sharing

Thus far, in this chapter, when we have discussed adding constraints or penalties to the parameters, we have always done so with respect to a fixed region or point. For example, L2 regularization (or weight decay) penalizes model parameters for deviating from the fixed value of zero. However, sometimes we may need other ways to express our prior knowledge about suitable values of the model parameters. Sometimes we might not know precisely what values the parameters should take, but we know, from knowledge of the domain and model architecture, that there should be some dependencies between the model parameters.

A common type of dependency that we often want to express is that certain parameters should be close to one another. Consider the following scenario: we have two models performing the same classification task (with the same set of classes) but with somewhat different input distributions. Formally, we have model A with parameters w^(A) and model B with parameters w^(B). The two models map the input to two different, but related outputs: ŷ^(A) = f(w^(A), x) and ŷ^(B) = g(w^(B), x).
Let us imagine that the tasks are similar enough (perhaps with similar input and output distributions) that we believe the model parameters should be close to each other: ∀i, w_i^(A) should be close to w_i^(B). We can leverage this information through regularization. Specifically, we can use a parameter norm penalty of the form: Ω(w^(A), w^(B)) = ‖w^(A) − w^(B)‖²₂. Here we used an L2 penalty, but other choices are also possible.

This kind of approach was proposed by Lasserre et al. (2006), who regularized the parameters of one model, trained as a classifier in a supervised paradigm, to be close to the parameters of another model, trained in an unsupervised paradigm (to capture the distribution of the observed input data). The architectures were constructed such that many of the parameters in the classifier model could be paired to corresponding parameters in
the unsupervised model.

While a parameter norm penalty is one way to regularize parameters to be close to one another, the more popular way is to use constraints: to force sets of parameters to be equal. This method of regularization is often referred to as parameter sharing, because we interpret the various models or model components as sharing a unique set of parameters. A significant advantage of parameter sharing over regularizing the parameters to be close (via a norm penalty) is that only a subset of the parameters (the unique set) need to be stored in memory. In certain models, such as the convolutional neural network, this can lead to significant reduction in the memory footprint of the model.
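A minimal sketch of the tying penalty Ω(w^(A), w^(B)) = ‖w^(A) − w^(B)‖²₂ and its gradients (illustrative only; the actual models of Lasserre et al. are not reproduced here). The penalty would simply be added, with some weighting, to the combined training objective of the two models:

```python
import numpy as np

def tying_penalty(w_a, w_b, scale=1.0):
    """Parameter tying: an L2 penalty Omega(w_a, w_b) = ||w_a - w_b||_2^2
    that encourages the two parameter vectors to stay close; it is added to
    the sum of the two models' task losses."""
    d = w_a - w_b
    return scale * d @ d

def tying_penalty_grads(w_a, w_b, scale=1.0):
    """Gradients of the penalty with respect to each parameter set; they
    pull w_a and w_b toward each other with equal and opposite force."""
    d = w_a - w_b
    return 2 * scale * d, -2 * scale * d
```

Parameter sharing, by contrast, keeps a single underlying array that both models read, so no penalty term and no duplicate storage are needed.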