is able to represent long-term dependencies, the gradient of a long-term interaction has exponentially smaller magnitude than the gradient of a short-term interaction. It does not mean that it is impossible to learn, but that it might take a very long time to learn long-term dependencies, because the signal about these dependencies will tend to be hidden by the smallest fluctuations arising from short-term dependencies. In practice, the experiments in Bengio et al. (1994) show that as we increase the span of the dependencies that need to be captured, gradient-based optimization becomes increasingly difficult, with the probability of successful training of a traditional RNN via SGD rapidly reaching 0 for sequences of only length 10 or 20. For a deeper treatment of recurrent networks as dynamical systems, see Doya (1993), Bengio et al. (1994) and Siegelmann and Sontag (1995), with a review in Pascanu et al. (2013). The remaining sections of this chapter discuss various approaches that have been proposed to reduce the difficulty of learning long-term dependencies (in some cases allowing an RNN to learn dependencies across hundreds of steps), but the problem of learning long-term dependencies remains one of the main challenges in deep learning.
10.8 Echo State Networks

The recurrent weights mapping from $h^{(t-1)}$ to $h^{(t)}$ and the input weights mapping from $x^{(t)}$ to $h^{(t)}$ are some of the most difficult parameters to learn in a recurrent network. One proposed approach to avoiding this difficulty (Jaeger, 2003; Maass et al., 2002; Jaeger and Haas, 2004; Jaeger, 2007b) is to set the recurrent weights such that the recurrent hidden units do a good job of capturing the history of past inputs, and learn only the output weights. This is the idea that was independently proposed for echo state networks or ESNs (Jaeger and Haas, 2004; Jaeger, 2007b) and liquid state machines (Maass et al., 2002). The latter is similar, except that it uses spiking neurons (with binary outputs) instead of the continuous-valued hidden units used for ESNs. Both ESNs and liquid state machines are termed reservoir computing (Lukosevicius and Jaeger, 2009) to denote the fact that the hidden units form a reservoir of temporal features which may capture different aspects of the history of inputs.
One way to think about these reservoir computing recurrent networks is that they are similar to kernel machines: they map an arbitrary-length sequence (the history of inputs up to time $t$) into a fixed-length vector (the recurrent state $h^{(t)}$), on which a linear predictor (typically a linear regression) can be applied to solve the problem of interest. The training criterion may then be easily designed to be convex as a function of the output weights. For example, if the output consists of linear regression from the hidden units to the output targets, and the training criterion is mean squared error, then it is convex and may be solved reliably with simple learning algorithms (Jaeger, 2003).
The important question is therefore: how do we set the input and recurrent weights so that a rich set of histories can be represented in the recurrent neural network state? The answer proposed in the reservoir computing literature is to view the recurrent net as a dynamical system, and set the input and recurrent weights such that the dynamical system is near the edge of stability. The original idea was to make the eigenvalues of the Jacobian of the state-to-state transition function be close to 1. As explained in section 8.2.5, an important characteristic of a recurrent network is the eigenvalue spectrum of the Jacobians $J^{(t)} = \frac{\partial s^{(t)}}{\partial s^{(t-1)}}$. Of particular importance is the spectral radius of $J^{(t)}$, defined to be the maximum of the absolute values of its eigenvalues.
To understand the effect of the spectral radius, consider the simple case of back-propagation with a Jacobian matrix $J$ that does not change with $t$. This case happens, for example, when the network is purely linear. Suppose that $J$ has an eigenvector $v$ with corresponding eigenvalue $\lambda$. Consider what happens as we propagate a gradient vector backwards through time. If we begin with a gradient vector $g$, then after one step of back-propagation, we will have $Jg$, and after $n$ steps we will have $J^n g$. Now consider what happens if we instead back-propagate a perturbed version of $g$. If we begin with $g + \delta v$, then after one step, we will have $J(g + \delta v)$. After $n$ steps, we will have $J^n (g + \delta v)$. From this we can see that back-propagation starting from $g$ and back-propagation starting from $g + \delta v$ diverge by $\delta J^n v$ after $n$ steps of back-propagation. If $v$ is chosen to be a unit eigenvector of $J$ with eigenvalue $\lambda$, then multiplication by the Jacobian simply scales the difference at each step.
The two executions of back-propagation are separated by a distance of $\delta |\lambda|^n$. When $v$ corresponds to the largest value of $|\lambda|$, this perturbation achieves the widest possible separation of an initial perturbation of size $\delta$. When $|\lambda| > 1$, the deviation size $\delta |\lambda|^n$ grows exponentially large. When $|\lambda| < 1$, the deviation size becomes exponentially small. Of course, this example assumed that the Jacobian was the same at every time step, corresponding to a recurrent network with no nonlinearity. When a nonlinearity is present, the derivative of the nonlinearity will approach zero on many time steps, and help to prevent the explosion resulting from a large spectral radius. Indeed, the most recent work on echo state networks advocates using a spectral radius much larger than unity (Yildiz et al., 2012; Jaeger, 2012).
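The exponential divergence described above is easy to verify numerically. The following sketch (our own illustration, not from the text; the matrix size and spectral radius are arbitrary) back-propagates $g$ and $g + \delta v$ through $n$ identical steps and compares their separation to $\delta |\lambda|^n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric matrix has real eigenvalues and eigenvectors,
# which keeps the demonstration exact.
A = rng.standard_normal((10, 10))
A = (A + A.T) / 2
w, V = np.linalg.eigh(A)
J = A * (1.1 / np.abs(w).max())   # rescale so the spectral radius is 1.1

v = V[:, np.abs(w).argmax()]      # unit eigenvector of the largest |eigenvalue|
g = rng.standard_normal(10)
delta = 1e-6

for n in (1, 10, 50, 100):
    a, b = g.copy(), g + delta * v
    for _ in range(n):            # n steps of "back-propagation" by J
        a, b = J @ a, J @ b
    print(n, np.linalg.norm(b - a), delta * 1.1 ** n)  # the two columns agree
```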
Everything we have said about back-propagation via repeated matrix multiplication applies equally to forward propagation in a network with no nonlinearity, where the state evolves as $h^{(t+1)\top} = h^{(t)\top} W$. When a linear map $W$ always shrinks $h$ as measured by the $L^2$ norm, then we say that the map is contractive. When the spectral radius is less than one, the mapping from $h^{(t)}$ to $h^{(t+1)}$ is contractive, so a small change becomes smaller after each time step. This necessarily makes the network forget information about the past when we use a finite level of precision (such as 32-bit integers) to store the state vector. The Jacobian matrix tells us how a small change of $h^{(t)}$ propagates one step forward, or equivalently, how the gradient on $h^{(t+1)}$ propagates one step backward, during back-propagation.
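A small forward-direction sketch (again our own illustration; using float32 rather than the 32-bit integers mentioned above, and an arbitrary spectral radius of 0.9): two nearby states iterated under a contractive map draw exponentially closer until finite precision makes them literally indistinguishable.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius 0.9: contractive

h1 = rng.standard_normal(8).astype(np.float32)
h2 = (h1 + 1e-3).astype(np.float32)             # slightly perturbed starting state

for t in range(1, 1201):
    h1 = (W.T @ h1).astype(np.float32)          # h(t+1) = W^T h(t), float32 state
    h2 = (W.T @ h2).astype(np.float32)
    if t % 300 == 0:
        # the gap shrinks like 0.9**t and eventually underflows to exactly 0
        print(t, np.linalg.norm(h2 - h1))
```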
Note that neither $W$ nor $J$ need to be symmetric (although they are square and real), so they can have complex-valued eigenvalues and eigenvectors, with imaginary components corresponding to potentially oscillatory behavior (if the same Jacobian was applied iteratively).
Even though $h^{(t)}$ or a small variation of $h^{(t)}$ of interest in back-propagation are real-valued, they can be expressed in such a complex-valued basis. What matters is what happens to the magnitude (complex absolute value) of these possibly complex-valued basis coefficients when we multiply the matrix by the vector. An eigenvalue with magnitude greater than one corresponds to magnification (exponential growth, if applied iteratively), while an eigenvalue with magnitude less than one corresponds to shrinking (exponential decay, if applied iteratively). With a nonlinear map, the Jacobian is free to change at each step. The dynamics therefore become more complicated. However, it remains true that a small initial variation can turn into a large variation after several steps. One difference between the purely linear case and the nonlinear case is that the use of a squashing nonlinearity such as tanh can cause the recurrent dynamics to become bounded. Note that it is possible for back-propagation to retain unbounded dynamics even when forward propagation has bounded dynamics, for example, when a sequence of tanh units are all in the middle of their linear regime and are connected by weight matrices with spectral radius greater than 1.
However, it is rare for all of the tanh units to simultaneously lie at their linear activation point. The strategy of echo state networks is simply to fix the weights to have some spectral radius such as 3, where information is carried forward through time but does not explode due to the stabilizing effect of saturating nonlinearities like tanh. More recently, it has been shown that the techniques used to set the weights in ESNs could be used to initialize the weights in a fully trainable recurrent network (with the hidden-to-hidden recurrent weights trained using back-propagation through time), helping to learn long-term dependencies (Sutskever, 2012; Sutskever et al., 2013). In this setting, an initial spectral radius of 1.2 performs well, combined with the sparse initialization scheme described in section 8.4.
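The whole echo state recipe fits in a few lines of NumPy. The sketch below is our own minimal illustration, with arbitrary sizes, a made-up delayed-copy task and a small ridge penalty: random input and recurrent weights are fixed, the recurrent matrix is rescaled to a chosen spectral radius, and only the output weights are fit by (convex) linear regression, as described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 200, 1000

# Fixed, untrained reservoir weights, rescaled to a chosen spectral radius.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 1.2 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius 1.2

# Toy task: predict a delayed copy of a random input signal.
x = rng.uniform(-1, 1, (T, n_in))
y = np.roll(x[:, 0], 5)                         # target = input from 5 steps ago

# Run the reservoir forward; tanh keeps the state bounded.
h = np.zeros(n_res)
H = np.zeros((T, n_res))
for t in range(T):
    h = np.tanh(W @ h + W_in @ x[t])
    H[t] = h

# Learn only the output weights: ridge-regularized least squares (convex).
ridge = 1e-6
w_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ y)
print("train MSE:", np.mean((H @ w_out - y) ** 2))
```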
10.9 Leaky Units and Other Strategies for Multiple Time Scales

One way to deal with long-term dependencies is to design a model that operates at multiple time scales, so that some parts of the model operate at fine-grained time scales and can handle small details, while other parts operate at coarse time scales and transfer information from the distant past to the present more efficiently. Various strategies for building both fine and coarse time scales are possible. These include the addition of skip connections across time, "leaky units" that integrate signals with different time constants, and the removal of some of the connections used to model fine-grained time scales.
10.9.1 Adding Skip Connections through Time

One way to obtain coarse time scales is to add direct connections from variables in the distant past to variables in the present. The idea of using such skip connections dates back to Lin et al. (1996) and follows from the idea of incorporating delays in feedforward neural networks (Lang and Hinton, 1988). In an ordinary recurrent network, a recurrent connection goes from a unit at time $t$ to a unit at time $t+1$. It is possible to construct recurrent networks with longer delays (Bengio, 1991). As we have seen in section 8.2.5, gradients may vanish or explode exponentially with respect to the number of time steps. Lin et al. (1996) introduced recurrent connections with a time delay of $d$ to mitigate this problem. Gradients now diminish exponentially as a function of $\frac{\tau}{d}$ rather than $\tau$. Since there are both delayed and single-step connections, gradients may still explode exponentially in $\tau$. This allows the learning algorithm to capture longer dependencies, although not all long-term dependencies may be represented well in this way.
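A back-of-the-envelope illustration of why the delay helps (our own; treating each Jacobian factor as a single scalar contraction of 0.9 is a gross simplification): along a path that uses only delayed connections, a dependency spanning $\tau$ steps passes through only $\tau / d$ factors.

```python
# Illustrative only: each step multiplies the gradient by a scalar 0.9.
contraction = 0.9
tau, d = 100, 10

grad_single_step = contraction ** tau         # ~2.7e-5 after 100 factors
grad_skip_path   = contraction ** (tau // d)  # ~0.35 after only 10 factors
print(grad_single_step, grad_skip_path)
```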
10.9.2 Leaky Units and a Spectrum of Different Time Scales

Another way to obtain paths on which the product of derivatives is close to one is to have units with linear self-connections and a weight near one on these connections. When we accumulate a running average $\mu^{(t)}$ of some value $v^{(t)}$ by applying the update $\mu^{(t)} \leftarrow \alpha \mu^{(t-1)} + (1 - \alpha) v^{(t)}$, the $\alpha$ parameter is an example of a linear self-connection from $\mu^{(t-1)}$ to $\mu^{(t)}$. When $\alpha$ is near one, the running average remembers information about the past for a long time, and when $\alpha$ is near zero, information about the past is rapidly discarded. Hidden units with linear self-connections can behave similarly to such running averages. Such hidden units are called leaky units.
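A tiny sketch of this update (our own, with illustrative values of $\alpha$): the closer the self-connection weight is to one, the longer an early event keeps influencing the state.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(200)
v[:50] += 5.0                       # an early "event" worth remembering

for alpha in (0.5, 0.9, 0.99):
    mu = 0.0
    for t in range(len(v)):
        mu = alpha * mu + (1 - alpha) * v[t]
    # with alpha near one, the early event still shows in the final state
    print(alpha, mu)
```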
Skip connections through $d$ time steps are a way of ensuring that a unit can always learn to be influenced by a value from $d$ time steps earlier. The use of a linear self-connection with a weight near one is a different way of ensuring that the unit can access values from the past. The linear self-connection approach allows this effect to be adapted more smoothly and flexibly by adjusting the real-valued $\alpha$ rather than by adjusting the integer-valued skip length. These ideas were proposed by Mozer (1992) and by El Hihi and Bengio (1996). Leaky units were also found to be useful in the context of echo state networks (Jaeger et al., 2007).
There are two basic strategies for setting the time constants used by leaky units. One strategy is to manually fix them to values that remain constant, for example by sampling their values from some distribution once at initialization time. Another strategy is to make the time constants free parameters and learn them. Having such leaky units at different time scales appears to help with long-term dependencies (Mozer, 1992; Pascanu et al., 2013).

10.9.3 Removing Connections

Another approach to handling long-term dependencies is the idea of organizing the state of the RNN at multiple time scales (El Hihi and Bengio, 1996), with information flowing more easily through long distances at the slower time scales. This idea differs from the skip connections through time discussed earlier because it involves actively removing length-one connections and replacing them with longer connections. Units modified in such a way are forced to operate on a long time scale. Skip connections through time add new edges. Units receiving such new connections may learn to operate on a long time scale but may also choose to focus on their other short-term connections.
There are different ways in which a group of recurrent units can be forced to operate at different time scales. One option is to make the recurrent units leaky, but to have different groups of units associated with different fixed time scales. This was the proposal in Mozer (1992) and has been successfully used in Pascanu et al. (2013). Another option is to have explicit and discrete updates taking place at different times, with a different frequency for different groups of units. This is the approach of El Hihi and Bengio (1996) and Koutnik et al. (2014). It worked well on a number of benchmark datasets.

10.10 The Long Short-Term Memory and Other Gated RNNs

As of this writing, the most effective sequence models used in practical applications are called gated RNNs. These include the long short-term memory and networks based on the gated recurrent unit. Like leaky units, gated RNNs are based on the idea of creating paths through time that have derivatives that neither vanish nor explode.
Leaky units did this with connection weights that were either manually chosen constants or were parameters. Gated RNNs generalize this to connection weights that may change at each time step.
Figure 10.16: Block diagram of the LSTM recurrent network "cell." Cells are connected recurrently to each other, replacing the usual hidden units of ordinary recurrent networks. An input feature is computed with a regular artificial neuron unit. Its value can be accumulated into the state if the sigmoidal input gate allows it. The state unit has a linear self-loop whose weight is controlled by the forget gate. The output of the cell can be shut off by the output gate. All the gating units have a sigmoid nonlinearity, while the input unit can have any squashing nonlinearity. The state unit can also be used as an extra input to the gating units. The black square indicates a delay of a single time step.

Leaky units allow the network to accumulate information (such as evidence for a particular feature or category) over a long duration. However, once that information has been used, it might be useful for the neural network to forget the old state.
For example, if a sequence is made of sub-sequences and we want a leaky unit to accumulate evidence inside each sub-sequence, we need a mechanism to forget the old state by setting it to zero. Instead of manually deciding when to clear the state, we want the neural network to learn to decide when to do it. This is what gated RNNs do.
10.10.1 LSTM

The clever idea of introducing self-loops to produce paths where the gradient can flow for long durations is a core contribution of the initial long short-term memory (LSTM) model (Hochreiter and Schmidhuber, 1997). A crucial addition has been to make the weight on this self-loop conditioned on the context, rather than fixed (Gers et al., 2000). By making the weight of this self-loop gated (controlled by another hidden unit), the time scale of integration can be changed dynamically. In this case, we mean that even for an LSTM with fixed parameters, the time scale of integration can change based on the input sequence, because the time constants are output by the model itself. The LSTM has been found extremely successful in many applications, such as unconstrained handwriting recognition (Graves et al., 2009), speech recognition (Graves et al., 2013; Graves and Jaitly, 2014), handwriting generation (Graves, 2013), machine translation (Sutskever et al., 2014), image captioning (Kiros et al., 2014b; Vinyals et al., 2014b; Xu et al., 2015) and parsing (Vinyals et al., 2014a).
The LSTM block diagram is illustrated in figure 10.16. The corresponding forward propagation equations are given below, in the case of a shallow recurrent network architecture. Deeper architectures have also been used successfully (Graves et al., 2013; Pascanu et al., 2014a). Instead of a unit that simply applies an element-wise nonlinearity to the affine transformation of inputs and recurrent units, LSTM recurrent networks have "LSTM cells" that have an internal recurrence (a self-loop), in addition to the outer recurrence of the RNN. Each cell has the same inputs and outputs as an ordinary recurrent network, but also has more parameters and a system of gating units that controls the flow of information.
The most important component is the state unit $s_i^{(t)}$, which has a linear self-loop similar to the leaky units described in the previous section. Here, however, the self-loop weight (or the associated time constant) is controlled by a forget gate unit $f_i^{(t)}$ (for time step $t$ and cell $i$), which sets this weight to a value between 0 and 1 via a sigmoid unit:

$$f_i^{(t)} = \sigma\left( b_i^f + \sum_j U_{i,j}^f x_j^{(t)} + \sum_j W_{i,j}^f h_j^{(t-1)} \right), \tag{10.40}$$

where $x^{(t)}$ is the current input vector and $h^{(t)}$ is the current hidden layer vector, containing the outputs of all the LSTM cells, and $b^f$, $U^f$ and $W^f$ are respectively biases, input weights and recurrent weights for the forget gates.
The LSTM cell internal state is thus updated as follows, but with a conditional self-loop weight $f_i^{(t)}$:

$$s_i^{(t)} = f_i^{(t)} s_i^{(t-1)} + g_i^{(t)} \sigma\left( b_i + \sum_j U_{i,j} x_j^{(t)} + \sum_j W_{i,j} h_j^{(t-1)} \right), \tag{10.41}$$

where $b$, $U$ and $W$ respectively denote the biases, input weights and recurrent weights into the LSTM cell. The external input gate unit $g_i^{(t)}$ is computed similarly to the forget gate (with a sigmoid unit to obtain a gating value between 0 and 1), but with its own parameters:

$$g_i^{(t)} = \sigma\left( b_i^g + \sum_j U_{i,j}^g x_j^{(t)} + \sum_j W_{i,j}^g h_j^{(t-1)} \right). \tag{10.42}$$

The output $h_i^{(t)}$ of the LSTM cell can also be shut off, via the output gate $q_i^{(t)}$, which also uses a sigmoid unit for gating:
$$h_i^{(t)} = \tanh\left( s_i^{(t)} \right) q_i^{(t)}, \tag{10.43}$$

$$q_i^{(t)} = \sigma\left( b_i^o + \sum_j U_{i,j}^o x_j^{(t)} + \sum_j W_{i,j}^o h_j^{(t-1)} \right), \tag{10.44}$$

which has parameters $b^o$, $U^o$ and $W^o$ for its biases, input weights and recurrent weights, respectively. Among the variants, one can choose to use the cell state $s_i^{(t)}$ as an extra input (with its weight) into the three gates of the $i$-th unit, as shown in figure 10.16. This would require three additional parameters.
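To make equations 10.40-10.44 concrete, here is a minimal NumPy transcription of a single LSTM forward step (our own sketch; the shapes, initialization scale and sequence are arbitrary, and the input unit uses the sigmoid of equation 10.41, although any squashing nonlinearity would do):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, s_prev, p):
    """One step of equations 10.40-10.44; p holds the parameter arrays."""
    f = sigmoid(p["bf"] + p["Uf"] @ x + p["Wf"] @ h_prev)    # forget gate (10.40)
    g = sigmoid(p["bg"] + p["Ug"] @ x + p["Wg"] @ h_prev)    # input gate  (10.42)
    q = sigmoid(p["bo"] + p["Uo"] @ x + p["Wo"] @ h_prev)    # output gate (10.44)
    s = f * s_prev + g * sigmoid(p["b"] + p["U"] @ x + p["W"] @ h_prev)  # (10.41)
    h = np.tanh(s) * q                                       # output      (10.43)
    return h, s

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
p = {}
for name in ("f", "g", "o", ""):       # forget, input, output gates, input unit
    p["b" + name] = np.zeros(n_hid)
    p["U" + name] = rng.standard_normal((n_hid, n_in)) * 0.1
    p["W" + name] = rng.standard_normal((n_hid, n_hid)) * 0.1

h, s = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):                     # run a short random sequence
    h, s = lstm_step(rng.standard_normal(n_in), h, s, p)
print(h)
```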
LSTM networks have been shown to learn long-term dependencies more easily than the simple recurrent architectures, first on artificial data sets designed for testing the ability to learn long-term dependencies (Bengio et al., 1994; Hochreiter and Schmidhuber, 1997; Hochreiter et al., 2001), then on challenging sequence processing tasks where state-of-the-art performance was obtained (Graves, 2012; Graves et al., 2013; Sutskever et al., 2014). Variants and alternatives to the LSTM have been studied and used and are discussed next.

10.10.2 Other Gated RNNs

Which pieces of the LSTM architecture are actually necessary? What other successful architectures could be designed that allow the network to dynamically control the time scale and forgetting behavior of different units?
Some answers to these questions are given with the recent work on gated RNNs, whose units are also known as gated recurrent units or GRUs (Cho et al., 2014b; Chung et al., 2014, 2015a; Jozefowicz et al., 2015; Chrupala et al., 2015). The main difference with the LSTM is that a single gating unit simultaneously controls the forgetting factor and the decision to update the state unit. The update equations are the following:

$$h_i^{(t)} = u_i^{(t-1)} h_i^{(t-1)} + \left( 1 - u_i^{(t-1)} \right) \sigma\left( b_i + \sum_j U_{i,j} x_j^{(t-1)} + \sum_j W_{i,j} r_j^{(t-1)} h_j^{(t-1)} \right), \tag{10.45}$$

where $u$ stands for the "update" gate and $r$ for the "reset" gate. Their value is defined as usual:
$$u_i^{(t)} = \sigma\left( b_i^u + \sum_j U_{i,j}^u x_j^{(t)} + \sum_j W_{i,j}^u h_j^{(t)} \right) \tag{10.46}$$

and

$$r_i^{(t)} = \sigma\left( b_i^r + \sum_j U_{i,j}^r x_j^{(t)} + \sum_j W_{i,j}^r h_j^{(t)} \right). \tag{10.47}$$

The reset and update gates can individually "ignore" parts of the state vector. The update gates act like conditional leaky integrators that can linearly gate any dimension, thus choosing to copy it (at one extreme of the sigmoid) or completely ignore it (at the other extreme) by replacing it by the new "target state" value (towards which the leaky integrator wants to converge). The reset gates control which parts of the state get used to compute the next target state, introducing an additional nonlinear effect in the relationship between past state and future state.
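A matching NumPy sketch of one GRU step, transcribing equations 10.45-10.47 (our own code; following the book's indexing, the gates entering equation 10.45 are computed from the previous step's input and state, and the target state uses $\sigma$ as written, though tanh is common in practice):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_prev, h_prev, p):
    """One step of equations 10.45-10.47; gates come from the previous step."""
    u = sigmoid(p["bu"] + p["Uu"] @ x_prev + p["Wu"] @ h_prev)   # update (10.46)
    r = sigmoid(p["br"] + p["Ur"] @ x_prev + p["Wr"] @ h_prev)   # reset  (10.47)
    target = sigmoid(p["b"] + p["U"] @ x_prev + p["W"] @ (r * h_prev))
    return u * h_prev + (1 - u) * target                         # (10.45)

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
p = {}
for name in ("u", "r", ""):            # update gate, reset gate, state unit
    p["b" + name] = np.zeros(n_hid)
    p["U" + name] = rng.standard_normal((n_hid, n_in)) * 0.1
    p["W" + name] = rng.standard_normal((n_hid, n_hid)) * 0.1

h = np.zeros(n_hid)
for t in range(5):
    h = gru_step(rng.standard_normal(n_in), h, p)
print(h)
```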
Many more variants around this theme can be designed. For example, the reset gate (or forget gate) output could be shared across multiple hidden units. Alternately, the product of a global gate (covering a whole group of units, such as an entire layer) and a local gate (per unit) could be used to combine global control and local control. However, several investigations over architectural variations of the LSTM and GRU found no variant that would clearly beat both of these across a wide range of tasks (Greff et al., 2015; Jozefowicz et al., 2015). Greff et al. (2015) found that a crucial ingredient is the forget gate, while Jozefowicz et al. (2015) found that adding a bias of 1 to the LSTM forget gate, a practice advocated by Gers et al. (2000), makes the LSTM as strong as the best of the explored architectural variants.
10.11 Optimization for Long-Term Dependencies

Sections 8.2.5 and 10.7 have described the vanishing and exploding gradient problems that occur when optimizing RNNs over many time steps. An interesting idea proposed by Martens and Sutskever (2011) is that second derivatives may vanish at the same time that first derivatives vanish. Second-order optimization algorithms may roughly be understood as dividing the first derivative by the second derivative (in higher dimension, multiplying the gradient by the inverse Hessian). If the second derivative shrinks at a similar rate to the first derivative, then the ratio of first and second derivatives may remain relatively constant. Unfortunately, second-order methods have many drawbacks, including high computational cost, the need for a large minibatch, and a tendency to be attracted to saddle points. Martens and Sutskever (2011) found promising results using second-order methods. Later, Sutskever et al. (2013) found that simpler methods such as Nesterov momentum with careful initialization could achieve similar results. See Sutskever (2012) for more detail.
Both of these approaches have largely been replaced by simply using SGD (even without momentum) applied to LSTMs. This is part of a continuing theme in machine learning: it is often much easier to design a model that is easy to optimize than it is to design a more powerful optimization algorithm.

10.11.1 Clipping Gradients

As discussed in section 8.2.4, strongly nonlinear functions such as those computed by a recurrent net over many time steps tend to have derivatives that can be either very large or very small in magnitude. This is illustrated in figures 8.3 and 10.17, in which we see that the objective function (as a function of the parameters) has a "landscape" in which one finds "cliffs": wide and rather flat regions separated by tiny regions where the objective function changes quickly, forming a kind of cliff. The difficulty that arises is that when the parameter gradient is very large, a gradient descent parameter update could throw the parameters very far, into a region where the objective function is larger, undoing much of the work that had been done to reach the current solution.
The gradient tells us the direction that corresponds to the steepest descent within an infinitesimal region surrounding the current parameters. Outside of this infinitesimal region, the cost function may begin to curve back upwards. The update must be chosen to be small enough to avoid traversing too much upward curvature. We typically use learning rates that decay slowly enough that consecutive steps have approximately the same learning rate.
A step size that is appropriate for a relatively linear part of the landscape is often inappropriate and causes uphill motion if we enter a more curved part of the landscape on the next step.

Figure 10.17: Example of the effect of gradient clipping in a recurrent network with two parameters $w$ and $b$. Gradient clipping can make gradient descent perform more reasonably in the vicinity of extremely steep cliffs. These steep cliffs commonly occur in recurrent networks near where a recurrent network behaves approximately linearly. The cliff is exponentially steep in the number of time steps because the weight matrix is multiplied by itself once for each time step. (Left) Gradient descent without gradient clipping overshoots the bottom of this small ravine, then receives a very large gradient from the cliff face. The large gradient catastrophically propels the parameters outside the axes of the plot. (Right) Gradient descent with gradient clipping has a more moderate reaction to the cliff. While it does ascend the cliff face, the step size is restricted so that it cannot be propelled away from the steep region near the solution. Figure adapted with permission from Pascanu et al. (2013).
A simple type of solution has been in use by practitioners for many years: clipping the gradient. There are different instances of this idea (Mikolov, 2012; Pascanu et al., 2013). One option is to clip the parameter gradient from a minibatch element-wise (Mikolov, 2012) just before the parameter update. Another is to clip the norm $||g||$ of the gradient $g$ (Pascanu et al., 2013) just before the parameter update:

$$\text{if } ||g|| > v, \tag{10.48}$$

$$g \leftarrow \frac{g v}{||g||}, \tag{10.49}$$

where $v$ is the norm threshold and $g$ is used to update parameters.
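Both variants take only a few lines. The sketch below (our own, with an arbitrary threshold) implements element-wise clipping and the norm clipping of equations 10.48-10.49, and prints a case where the two differ in whether they preserve the gradient direction:

```python
import numpy as np

def clip_elementwise(g, v):
    """Clip each gradient component into [-v, v] (Mikolov-style)."""
    return np.clip(g, -v, v)

def clip_norm(g, v):
    """Rescale g so its norm is at most v (equations 10.48-10.49)."""
    norm = np.linalg.norm(g)
    return g * (v / norm) if norm > v else g

g = np.array([3.0, -4.0])        # ||g|| = 5
print(clip_elementwise(g, 1.0))  # [ 1. -1. ]: the direction changes
print(clip_norm(g, 1.0))         # [ 0.6 -0.8]: the direction is preserved
```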
Because the gradient of all the parameters (including different groups of parameters, such as weights and biases) is renormalized jointly with a single scaling factor, the latter method has the advantage that it guarantees that each step is still in the gradient direction, but experiments suggest that both forms work similarly. Although the parameter update has the same direction as the true gradient, with gradient norm clipping, the parameter update vector norm is now bounded. This bounded gradient avoids performing a detrimental step when the gradient explodes. In fact, even simply taking a random step when the gradient magnitude is above a threshold tends to work almost as well. If the explosion is so severe that the gradient is numerically Inf or NaN (considered infinite or not-a-number), then a random step of size $v$ can be taken and will typically move away from the numerically unstable configuration.
Clipping the gradient norm per-minibatch will not change the direction of the gradient for an individual minibatch. However, taking the average of the norm-clipped gradient from many minibatches is not equivalent to clipping the norm of the true gradient (the gradient formed from using all examples). Examples that have large gradient norm, as well as examples that appear in the same minibatch as such examples, will have their contribution to the final direction diminished. This stands in contrast to traditional minibatch gradient descent, where the true gradient direction is equal to the average over all minibatch gradients. Put another way, traditional stochastic gradient descent uses an unbiased estimate of the gradient, while gradient descent with norm clipping introduces a heuristic bias that we know empirically to be useful. With element-wise clipping, the direction of the update is not aligned with the true gradient or the minibatch gradient, but it is still a descent direction. It has also been proposed (Graves, 2013) to clip the back-propagated gradient (with respect to hidden units), but no comparison has been published between these variants; we conjecture that all these methods behave similarly.
10.11.2 Regularizing to Encourage Information Flow

Gradient clipping helps to deal with exploding gradients, but it does not help with vanishing gradients. To address vanishing gradients and better capture long-term dependencies, we discussed the idea of creating paths in the computational graph of the unfolded recurrent architecture along which the product of gradients associated with arcs is near 1. One approach to achieve this is with LSTMs and other self-loops and gating mechanisms, described above in section 10.10. Another idea is to regularize or constrain the parameters so as to encourage "information flow." In particular, we would like the gradient vector $\nabla_{h^{(t)}} L$ being back-propagated to maintain its magnitude, even if the loss function only penalizes the output at the end of the sequence.
Formally, we want

$$\left( \nabla_{h^{(t)}} L \right)^\top \frac{\partial h^{(t)}}{\partial h^{(t-1)}} \tag{10.50}$$

to be as large as

$$\nabla_{h^{(t)}} L. \tag{10.51}$$

With this objective, Pascanu et al. (2013) propose the following regularizer:

$$\Omega = \sum_t \left( \frac{ \left\| \left( \nabla_{h^{(t)}} L \right)^\top \frac{\partial h^{(t)}}{\partial h^{(t-1)}} \right\| }{ \left\| \nabla_{h^{(t)}} L \right\| } - 1 \right)^2. \tag{10.52}$$

Computing the gradient of this regularizer may appear difficult, but Pascanu et al. (2013) propose an approximation in which we consider the back-propagated vectors $\nabla_{h^{(t)}} L$ as if they were constants (for the purpose of this regularizer, so that there is no need to back-propagate through them).
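As a sketch of the quantity equation 10.52 measures (our own illustration; in a real implementation the per-step Jacobians and loss gradients would come from back-propagation through the RNN, while here they are random stand-ins):

```python
import numpy as np

def information_flow_penalty(grads, jacobians):
    """Equation 10.52: penalize any change in gradient magnitude per step.

    grads[t]     stands in for the back-propagated vector grad_{h(t)} L.
    jacobians[t] stands in for the Jacobian d h(t) / d h(t-1).
    """
    omega = 0.0
    for g, J in zip(grads, jacobians):
        ratio = np.linalg.norm(g @ J) / np.linalg.norm(g)
        omega += (ratio - 1.0) ** 2
    return omega

rng = np.random.default_rng(0)
T, n = 20, 16
grads = [rng.standard_normal(n) for _ in range(T)]
jacobians = [rng.standard_normal((n, n)) * 0.1 for _ in range(T)]
print(information_flow_penalty(grads, jacobians))   # large: gradients shrink
```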
The experiments with this regularizer suggest that, if combined with the norm clipping heuristic (which handles gradient explosion), the regularizer can considerably increase the span of the dependencies that an RNN can learn. Because it keeps the RNN dynamics on the edge of explosive gradients, the gradient clipping is particularly important. Without gradient clipping, gradient explosion prevents learning from succeeding. A key weakness of this approach is that it is not as effective as the LSTM for tasks where data is abundant, such as language modeling.

10.12 Explicit Memory

Intelligence requires knowledge, and acquiring knowledge can be done via learning, which has motivated the development of large-scale deep architectures. However, there are different kinds of knowledge. Some knowledge can be implicit, sub-conscious, and difficult to verbalize, such as how to walk, or how a dog looks different from a cat. Other knowledge can be explicit, declarative, and relatively straightforward to put into words: everyday commonsense knowledge, like "a cat is a kind of animal," or very specific facts that you need to know to accomplish your current goals, like "the meeting with the sales team is at 3:00 PM in room 141."
Neural networks excel at storing implicit knowledge. However, they struggle to memorize facts. Stochastic gradient descent requires many presentations of the same input before it can be stored in a neural network's parameters, and even then, that input will not be stored especially precisely.
Figure 10.18: A schematic of an example of a network with an explicit memory, capturing some of the key design elements of the neural Turing machine. In this diagram we distinguish the "representation" part of the model (the "task network," here a recurrent net at the bottom) from the "memory" part of the model (the set of cells), which can store facts. The task network learns to "control" the memory, deciding where to read from and where to write to within the memory (through the reading and writing mechanisms, indicated by bold arrows pointing at the reading and writing addresses).
Graves et al. (2014b) hypothesized that this is because neural networks lack the equivalent of the working memory system that allows human beings to explicitly hold and manipulate pieces of information that are relevant to achieving some goal. Such explicit memory components would allow our systems not only to rapidly and "intentionally" store and retrieve specific facts but also to sequentially reason with them. The need for neural networks that can process information in a sequence of steps, changing the way the input is fed into the network at each step, has long been recognized as important for the ability to reason rather than to make automatic, intuitive responses to the input (Hinton, 1990). To resolve this difficulty, Weston et al. (2014) introduced memory networks that include a set of memory cells that can be accessed via an addressing mechanism. Memory networks originally required a supervision signal instructing them how to use their memory cells.
Graves et al. (2014b) introduced the neural Turing machine, which is able to learn to read from and write arbitrary content to memory cells without explicit supervision about which actions to undertake, and allowed end-to-end training without this supervision signal, via the use of a content-based soft attention mechanism (see Bahdanau et al. (2015) and section 12.4.5.1). This soft addressing mechanism has become standard with other related architectures emulating algorithmic mechanisms in a way that still allows gradient-based optimization (Sukhbaatar et al., 2015; Joulin and Mikolov, 2015; Kumar et al., 2015; Vinyals et al., 2015a; Grefenstette et al., 2015). Each memory cell can be thought of as an extension of the memory cells in LSTMs and GRUs. The difference is that the network outputs an internal state that chooses which cell to read from or write to, just as memory accesses in a digital computer read from or write to a specific address.
It is difficult to optimize functions that produce exact, integer addresses. To alleviate this problem, NTMs actually read from or write to many memory cells simultaneously. To read, they take a weighted average of many cells. To write, they modify multiple cells by different amounts. The coefficients for these operations are chosen to be focused on a small number of cells, for example, by producing them via a softmax function. Using these weights with non-zero derivatives allows the functions controlling access to the memory to be optimized using gradient descent. The gradient on these coefficients indicates whether each of them should be increased or decreased, but the gradient will typically be large only for those memory addresses receiving a large coefficient.
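A minimal sketch of this kind of soft addressing (our own toy illustration, only loosely NTM-flavored: the controller scores are random stand-ins, and a real NTM write uses separate erase and add vectors):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n_cells, cell_dim = 16, 8
memory = rng.standard_normal((n_cells, cell_dim))

# A controller network would emit these scores; we fake them here.
w = softmax(rng.standard_normal(n_cells) * 5.0)  # sharp weights: few cells dominate

# Soft read: a weighted average of all cells (differentiable in w and memory).
read = w @ memory

# Soft write: each cell moves toward the new value in proportion to its weight.
new_value = rng.standard_normal(cell_dim)
memory = (1 - w[:, None]) * memory + w[:, None] * new_value
print(read.shape, memory.shape)
```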
These memory cells are typically augmented to contain a vector, rather than the single scalar stored by an LSTM or GRU memory cell. There are two reasons to increase the size of the memory cell. One reason is that we have increased the cost of accessing a memory cell. We pay the computational cost of producing a coefficient for many cells, but we expect these coefficients to cluster around a small number of cells. By reading a vector value, rather than a scalar value, we can amortize some of this cost. Another reason to use vector-valued memory cells is that they allow for content-based addressing, where the weight used to read from or write to a cell is a function of that cell. Vector-valued cells allow us to retrieve a complete vector-valued memory if we are able to produce a pattern that matches some but not all of its elements. This is analogous to the way that people can recall the lyrics of a song based on a few words. We can think of a content-based read instruction as saying, "retrieve the lyrics of the song that has the chorus 'We all live in a yellow submarine.'"
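Content-based addressing can be sketched as a similarity-then-softmax lookup (again our own toy code; cosine similarity is the choice used by the neural Turing machine, and the sharpness constant is arbitrary):

```python
import numpy as np

def content_address(memory, key, sharpness=10.0):
    """Weight each cell by its cosine similarity to a query key."""
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    )
    e = np.exp(sharpness * (sims - sims.max()))
    return e / e.sum()

rng = np.random.default_rng(0)
memory = rng.standard_normal((16, 8))
key = memory[3] + 0.1 * rng.standard_normal(8)   # a noisy, partial pattern

w = content_address(memory, key)
print(w.argmax())                                # 3: the matching cell dominates
```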
Content-based addressing is more useful when we make the objects to be retrieved large: if every letter of the song were stored in a separate memory cell, we would not be able to find them this way. By comparison, location-based addressing is not allowed to refer to the content of the memory. We can think of a location-based read instruction as saying "retrieve the lyrics of the song in slot 347." Location-based addressing can often be a perfectly sensible mechanism even when the memory cells are small. If the content of a memory cell is copied (not forgotten) at most time steps, then the information it contains can be propagated forward in time and the gradients propagated backward in time without either vanishing or exploding. The explicit memory approach is illustrated in figure 10.18, where we see that a "task neural network" is coupled with a memory.
Although that task neural network could be feedforward or recurrent, the overall system is a recurrent network. The task network can choose to read from or write to specific memory addresses. Explicit memory seems to allow models to learn tasks that ordinary RNNs or LSTM RNNs cannot learn. One reason for this advantage may be that information and gradients can be propagated (forward in time or backwards in time, respectively) for very long durations. As an alternative to back-propagation through weighted averages of memory cells, we can interpret the memory addressing coefficients as probabilities and stochastically read just one cell (Zaremba and Sutskever, 2015). Optimizing models that make discrete decisions requires specialized optimization algorithms, described in section 20.9.1. So far, training these stochastic architectures that make discrete decisions remains harder than training deterministic algorithms that make soft decisions.
Whether it is soft (allowing back-propagation) or stochastic and hard, the mechanism for choosing an address is in its form identical to the attention mechanism which had been previously introduced in the context of machine translation (Bahdanau et al., 2015) and is discussed in section 12.4.5.1. The idea of attention mechanisms for neural networks was introduced even earlier, in the context of handwriting generation (Graves, 2013), with an attention mechanism that was constrained to move only forward in time through the sequence. In the case of machine translation and memory networks, at each step, the focus of attention can move to a completely different place compared to the previous step.

Recurrent neural networks provide a way to extend deep learning to sequential data. They are the last major tool in our deep learning toolbox. Our discussion now moves to how to choose and use these tools and how to apply them to real-world tasks.
chapter 11 practical methodology

successfully applying deep learning techniques requires more than just a good knowledge of what algorithms exist and the principles that explain how they work. a good machine learning practitioner also needs to know how to choose an algorithm for a particular application and how to monitor and respond to feedback obtained from experiments in order to improve a machine learning system. during day to day development of machine learning systems, practitioners need to decide whether to gather more data, increase or decrease model capacity, add or remove regularizing features, improve the optimization of a model, improve approximate inference in a model, or debug the software implementation of the model. all of these operations are at the very least time-consuming to try out, so it is important to be able to determine the right course of action rather than blindly guessing. most of this book is about different machine learning models, training algorithms, and objective functions. this may give the impression that the most important ingredient to being a machine learning expert is knowing a wide variety of machine learning techniques and being good at different kinds of math.
in practice, one can usually do much better with a correct application of a commonplace algorithm than by sloppily applying an obscure algorithm. correct application of an algorithm depends on mastering some fairly simple methodology. many of the recommendations in this chapter are adapted from ng (2015). we recommend the following practical design process:

• determine your goals: what error metric to use, and your target value for this error metric. these goals and error metrics should be driven by the problem that the application is intended to solve.

• establish a working end-to-end pipeline as soon as possible, including the
estimation of the appropriate performance metrics.

• instrument the system well to determine bottlenecks in performance. diagnose which components are performing worse than expected and whether it is due to overfitting, underfitting, or a defect in the data or software.

• repeatedly make incremental changes such as gathering new data, adjusting hyperparameters, or changing algorithms, based on specific findings from your instrumentation.

as a running example, we will use the street view address number transcription system (goodfellow et al., 2014d). the purpose of this application is to add buildings to google maps. street view cars photograph the buildings and record the gps coordinates associated with each photograph. a convolutional network recognizes the address number in each photograph, allowing the google maps database to add that address in the correct location. the story of how this commercial application was developed gives an example of how to follow the design methodology we advocate. we now describe each of the steps in this process.
11.1 performance metrics

determining your goals, in terms of which error metric to use, is a necessary first step because your error metric will guide all of your future actions. you should also have an idea of what level of performance you desire. keep in mind that for most applications, it is impossible to achieve absolute zero error. the bayes error defines the minimum error rate that you can hope to achieve, even if you have infinite training data and can recover the true probability distribution. this is because your input features may not contain complete information about the output variable, or because the system might be intrinsically stochastic. you will also be limited by having a finite amount of training data. the amount of training data can be limited for a variety of reasons. when your goal is to build the best possible real-world product or service, you can typically collect more data but must determine the value of reducing error further and weigh this against the cost
of collecting more data. data collection can require time, money, or human suffering (for example, if your data collection process involves performing invasive medical tests). when your goal is to answer a scientific question about which algorithm performs better on a fixed benchmark, the benchmark
specification usually determines the training set and you are not allowed to collect more data. how can one determine a reasonable level of performance to expect? typically, in the academic setting, we have some estimate of the error rate that is attainable based on previously published benchmark results. in the real-world setting, we have some idea of the error rate that is necessary for an application to be safe, cost-effective, or appealing to consumers. once you have determined your realistic desired error rate, your design decisions will be guided by reaching this error rate. another important consideration besides the target value of the performance metric is the choice of which metric to use. several different performance metrics may be used to measure the effectiveness of a complete application that includes machine learning components. these performance metrics are usually different from the cost function used to train the model. as described in section 5.1.2, it is common to measure the accuracy, or equivalently, the error rate, of a system.
however, many applications require more advanced metrics. sometimes it is much more costly to make one kind of mistake than another. for example, an e-mail spam detection system can make two kinds of mistakes: incorrectly classifying a legitimate message as spam, and incorrectly allowing a spam message to appear in the inbox. it is much worse to block a legitimate message than to allow a questionable message to pass through. rather than measuring the error rate of a spam classifier, we may wish to measure some form of total cost, where the cost of blocking legitimate messages is higher than the cost of allowing spam messages.
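a minimal sketch of such a total-cost metric, with made-up labels and purely illustrative cost weights (blocking a legitimate message assumed ten times as costly as letting spam through):

```python
import numpy as np

def total_cost(y_true, y_pred, cost_fp=10.0, cost_fn=1.0):
    # labels: 1 = spam, 0 = legitimate
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    false_positives = np.sum((y_pred == 1) & (y_true == 0))  # legit blocked
    false_negatives = np.sum((y_pred == 0) & (y_true == 1))  # spam let in
    return cost_fp * false_positives + cost_fn * false_negatives

# two classifiers with the same error *rate* can have very different costs
print(total_cost([0, 0, 1, 1], [1, 0, 1, 1]))  # one legit blocked -> 10.0
print(total_cost([0, 0, 1, 1], [0, 0, 0, 1]))  # one spam let in   -> 1.0
```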
sometimes we wish to train a binary classifier that is intended to detect some rare event. for example, we might design a medical test for a rare disease. suppose that only one in every million people has this disease. we can easily achieve 99.9999% accuracy on the detection task, by simply hard-coding the classifier to always report that the disease is absent. clearly, accuracy is a poor way to characterize the performance of such a system. one way to solve this problem is to instead measure precision and recall. precision is the fraction of detections reported by the model that were correct, while recall is the fraction of true events that were detected. a detector that says no one has the disease would achieve perfect precision, but zero recall. a detector that says everyone has the disease would achieve perfect recall, but precision equal to the percentage of people who have the disease (0.0001% in our example of a disease that only one person in a million has). when using precision and recall, it is common to plot a pr curve, with precision on the y-axis and recall on the x-axis. the classifier generates a score that is higher if the event to be detected occurred. for example, a feedforward
network designed to detect a disease outputs ŷ = p(y = 1 | x), estimating the probability that a person whose medical results are described by features x has the disease. we choose to report a detection whenever this score exceeds some threshold. by varying the threshold, we can trade precision for recall. in many cases, we wish to summarize the performance of the classifier with a single number rather than a curve. to do so, we can convert precision p and recall r into an f-score given by

f = 2pr / (p + r). (11.1)

another option is to report the total area lying beneath the pr curve.
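a small sketch of precision, recall and the f-score of equation 11.1, with a made-up set of scores and labels; sweeping the threshold traces out the pr curve:

```python
import numpy as np

def precision_recall(y_true, scores, threshold):
    y_pred = scores >= threshold
    tp = np.sum(y_pred & (y_true == 1))
    detections = np.sum(y_pred)
    events = np.sum(y_true == 1)
    p = tp / detections if detections > 0 else 1.0  # no detections: vacuous precision
    r = tp / events if events > 0 else 0.0
    return p, r

def f_score(p, r):
    # equation 11.1
    return 2 * p * r / (p + r) if (p + r) > 0 else 0.0

y_true = np.array([0, 0, 1, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.6])
for t in [0.3, 0.5, 0.7]:  # each threshold is one point on the pr curve
    p, r = precision_recall(y_true, scores, t)
    print(t, p, r, f_score(p, r))
```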
in some applications, it is possible for the machine learning system to refuse to make a decision. this is useful when the machine learning algorithm can estimate how confident it should be about a decision, especially if a wrong decision can be harmful and if a human operator is able to occasionally take over. the street view transcription system provides an example of this situation. the task is to transcribe the address number from a photograph in order to associate the location where the photo was taken with the correct address in a map. because the value of the map degrades considerably if the map is inaccurate, it is important to add an address only if the transcription is correct. if the machine learning system thinks that it is less likely than a human being to obtain the correct transcription, then the best course of action is to allow a human to transcribe the photo instead. of course, the machine learning system is only useful if it is able to dramatically reduce the amount of photos that the human operators must process. a natural performance metric to use in this situation is coverage. coverage is the fraction of examples for which the machine learning system is able to produce a response. it is possible to trade coverage for accuracy. one can always obtain 100% accuracy by refusing to process any example, but this reduces the coverage to 0%.
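a sketch of this coverage/accuracy tradeoff, using synthetic confidence scores as a stand-in for a real model:

```python
import numpy as np

def coverage_and_accuracy(confidences, correct, threshold):
    # the system answers only when its confidence exceeds the threshold
    answered = confidences >= threshold
    coverage = answered.mean()                 # fraction of examples answered
    if answered.sum() == 0:
        return 0.0, 1.0                        # refuse everything: vacuously accurate
    accuracy = correct[answered].mean()        # accuracy on answered examples only
    return coverage, accuracy

rng = np.random.default_rng(0)
confidences = rng.uniform(size=1000)
correct = rng.uniform(size=1000) < confidences  # higher confidence, more often right
for t in [0.0, 0.5, 0.9]:
    print(t, coverage_and_accuracy(confidences, correct, t))
```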
for the street view task, the goal for the project was to reach human-level transcription accuracy while maintaining 95% coverage. human-level performance on this task is 98% accuracy. many other metrics are possible. we can, for example, measure click-through rates, collect user satisfaction surveys, and so on. many specialized application areas have application-specific criteria as well. what is important is to determine which performance metric to improve ahead of time, then concentrate on improving this metric. without clearly defined goals, it can be difficult to tell whether changes to a machine learning system make progress or not.
11.2 default baseline models

after choosing performance metrics and goals, the next step in any practical application is to establish a reasonable end-to-end system as soon as possible. in this section, we provide recommendations for which algorithms to use as the first baseline approach in various situations. keep in mind that deep learning research progresses quickly, so better default algorithms are likely to become available soon after this writing. depending on the complexity of your problem, you may even want to begin without using deep learning. if your problem has a chance of being solved by just choosing a few linear weights correctly, you may want to begin with a simple statistical model like logistic regression. if you know that your problem falls into an "ai-complete" category like object recognition, speech recognition, machine translation, and so on, then you are likely to do well by beginning with an appropriate deep learning model. first, choose the general category of model based on the structure of your data. if you want to perform
supervised learning with fixed-size vectors as input, use a feedforward network with fully connected layers. if the input has known topological structure (for example, if the input is an image), use a convolutional network. in these cases, you should begin by using some kind of piecewise linear unit (relus or their generalizations like leaky relus, prelus and maxout). if your input or output is a sequence, use a gated recurrent net (lstm or gru). a reasonable choice of optimization algorithm is sgd with momentum with a decaying learning rate (popular decay schemes that perform better or worse on different problems include decaying linearly until reaching a fixed minimum learning rate, decaying exponentially, or decreasing the learning rate by a factor of 2-10 each time validation error plateaus). another very reasonable alternative is adam.
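the decay schemes just mentioned can be sketched as follows; the constants (`eps0`, `tau`, the plateau patience) are illustrative choices, not prescribed values:

```python
def linear_decay(step, eps0=0.1, eps_min=0.001, tau=10000):
    # decay linearly until reaching a fixed minimum learning rate after tau steps
    alpha = min(step / tau, 1.0)
    return (1 - alpha) * eps0 + alpha * eps_min

def exponential_decay(step, eps0=0.1, decay=0.9999):
    # multiply the learning rate by a constant factor at every step
    return eps0 * decay ** step

def plateau_decay(current_eps, val_errors, factor=0.5, patience=3):
    # cut the learning rate by a constant factor when validation error has not
    # improved over the last `patience` evaluations
    if len(val_errors) > patience and \
            min(val_errors[-patience:]) >= min(val_errors[:-patience]):
        return current_eps * factor
    return current_eps
```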
batch normalization can have a dramatic effect on optimization performance, especially for convolutional networks and networks with sigmoidal nonlinearities. while it is reasonable to omit batch normalization from the very first baseline, it should be introduced quickly if optimization appears to be problematic. unless your training set contains tens of millions of examples or more, you should include some mild forms of regularization from the start. early stopping should be used almost universally. dropout is an excellent regularizer that is easy to implement and compatible with many models and training algorithms. batch normalization also sometimes reduces generalization error and allows dropout to be omitted, due to the noise in the estimate of the statistics used to normalize each variable.
if your task is similar to another task that has been studied extensively, you will probably do well by first copying the model and algorithm that is already known to perform best on the previously studied task. you may even want to copy a trained model from that task. for example, it is common to use the features from a convolutional network trained on imagenet to solve other computer vision tasks (girshick et al., 2015). a common question is whether to begin by using unsupervised learning, described further in part iii. this is somewhat domain specific. some domains, such as natural language processing, are known to benefit tremendously from unsupervised learning techniques such as learning unsupervised word embeddings. in other domains, such as computer vision, current unsupervised learning techniques do not bring a benefit, except in the semi-supervised setting
, when the number of labeled examples is very small (kingma et al., 2014; rasmus et al., 2015). if your application is in a context where unsupervised learning is known to be important, then include it in your first end-to-end baseline. otherwise, only use unsupervised learning in your first attempt if the task you want to solve is unsupervised. you can always try adding unsupervised learning later if you observe that your initial baseline overfits.

11.3 determining whether to gather more data

after the first end-to-end system is established, it is time to measure the performance of the algorithm and determine how to improve it. many machine learning novices are tempted to make improvements by trying out many different algorithms. however, it is often much better to gather more data than to improve the learning algorithm. how does one decide whether to gather more data? first, determine whether
the performance on the training set is acceptable. if performance on the training set is poor, the learning algorithm is not using the training data that is already available, so there is no reason to gather more data. instead, try increasing the size of the model by adding more layers or adding more hidden units to each layer. also, try improving the learning algorithm, for example by tuning the learning rate hyperparameter. if large models and carefully tuned optimization algorithms do not work well, then the problem might be the quality of the training data. the data may be too noisy or may not include the right inputs needed to predict the desired outputs. this suggests starting over, collecting cleaner data or collecting a richer set of features. if the performance on the training set is acceptable, then measure the performance
on a test set. if the performance on the test set is also acceptable, then there is nothing left to be done. if test set performance is much worse than training set performance, then gathering more data is one of the most effective solutions. the key considerations are the cost and feasibility of gathering more data, the cost and feasibility of reducing the test error by other means, and the amount of data that is expected to be necessary to improve test set performance significantly. at large internet companies with millions or billions of users, it is feasible to gather large datasets, and the expense of doing so can be considerably less than the other alternatives, so the answer is almost always to gather more training data. for example, the development of large labeled datasets was one of the most important factors in solving object recognition. in other contexts, such as medical applications, it may be costly or infeasible to gather more data. a simple alternative to gathering more data is
to reduce the size of the model or improve regularization, by adjusting hyperparameters such as weight decay coefficients, or by adding regularization strategies such as dropout. if you find that the gap between train and test performance is still unacceptable even after tuning the regularization hyperparameters, then gathering more data is advisable. when deciding whether to gather more data, it is also necessary to decide how much to gather. it is helpful to plot curves showing the relationship between training set size and generalization error, like in figure 5.4. by extrapolating such curves, one can predict how much additional training data would be needed to achieve a certain level of performance. usually, adding a small fraction of the total number of examples will not have a noticeable impact on generalization error. it is therefore recommended to experiment with training set sizes on a logarithmic scale, for example doubling the number of examples between consecutive experiments.
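a sketch of such an experiment, where `train_and_evaluate` is a hypothetical stand-in for your training pipeline returning generalization error on a held-out set:

```python
def learning_curve(train_and_evaluate, dataset, start=1000):
    # probe generalization error at training set sizes spaced on a log scale
    sizes, errors = [], []
    n = start
    while n <= len(dataset):
        sizes.append(n)
        errors.append(train_and_evaluate(dataset[:n]))
        n *= 2   # double the number of examples between consecutive experiments
    return sizes, errors
```

plotting `errors` against `sizes` on a log axis and extrapolating the trend gives a rough forecast of how much data a target error level would require.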
if gathering much more data is not feasible, the only other way to improve generalization error is to improve the learning algorithm itself. this becomes the domain of research and not the domain of advice for applied practitioners.

11.4 selecting hyperparameters

most deep learning algorithms come with many hyperparameters that control many aspects of the algorithm's behavior. some of these hyperparameters affect the time and memory cost of running the algorithm. some of these hyperparameters affect the quality of the model recovered by the training process and its ability to infer correct results when deployed on new inputs. there are two basic approaches to choosing these hyperparameters: choosing them manually and choosing them automatically. choosing the hyperparameters
manually requires understanding what the hyperparameters do and how machine learning models achieve good generalization. automatic hyperparameter selection algorithms greatly reduce the need to understand these ideas, but they are often much more computationally costly.

11.4.1 manual hyperparameter tuning

to set hyperparameters manually, one must understand the relationship between hyperparameters, training error, generalization error and computational resources (memory and runtime). this means establishing a solid foundation on the fundamental ideas concerning the effective capacity of a learning algorithm from chapter 5. the goal of manual hyperparameter search is usually to find the lowest generalization error subject to some runtime and memory budget. we do not discuss how to determine the runtime and memory impact of various hyperparameters here because this is highly platform-dependent. the primary goal of manual hyperparameter search is to adjust the effective capacity of the model to match the complexity of the task. effective capacity is
constrained by three factors: the representational capacity of the model, the ability of the learning algorithm to successfully minimize the cost function used to train the model, and the degree to which the cost function and training procedure regularize the model. a model with more layers and more hidden units per layer has higher representational capacity: it is capable of representing more complicated functions. it cannot necessarily actually learn all of these functions though, if the training algorithm cannot discover that certain functions do a good job of minimizing the training cost, or if regularization terms such as weight decay forbid some of these functions. the generalization error typically follows a u-shaped curve when plotted as a function of one of the hyperparameters, as in figure 5.3. at one extreme, the hyperparameter value corresponds to low capacity, and generalization error is high because training error is high. this is the underfitting regime. at the other extreme, the hyperparameter value corresponds to high capacity, and the
generalization error is high because the gap between training and test error is high. somewhere in the middle lies the optimal model capacity, which achieves the lowest possible generalization error, by adding a medium generalization gap to a medium amount of training error. for some hyperparameters, overfitting occurs when the value of the hyperparameter is large. the number of hidden units in a layer is one such example,
because increasing the number of hidden units increases the capacity of the model. for some hyperparameters, overfitting occurs when the value of the hyperparameter is small. for example, the smallest allowable weight decay coefficient of zero corresponds to the greatest effective capacity of the learning algorithm. not every hyperparameter will be able to explore the entire u-shaped curve. many hyperparameters are discrete, such as the number of units in a layer or the number of linear pieces in a maxout unit, so it is only possible to visit a few points along the curve. some hyperparameters are binary. usually these hyperparameters are switches that specify whether or not to use some optional component of the learning algorithm, such as a preprocessing step that normalizes the input features by subtracting their mean and dividing by their standard deviation. these hyperparameters can only explore two points on the curve. other hyperparameters have some minimum
or maximum value that prevents them from exploring some part of the curve. for example, the minimum weight decay coefficient is zero. this means that if the model is underfitting when weight decay is zero, we cannot enter the overfitting region by modifying the weight decay coefficient. in other words, some hyperparameters can only subtract capacity. the learning rate is perhaps the most important hyperparameter. if you have time to tune only one hyperparameter, tune the learning rate. it controls the effective capacity of the model in a more complicated way than other hyperparameters: the effective capacity of the model is highest when the learning rate is correct for the optimization problem, not when the learning rate is especially large or especially small. the learning rate has a u-shaped curve for training error, illustrated in figure 11.1. when the learning rate is too large, gradient descent can inadvertently increase rather than decrease the training error.
in the idealized quadratic case, this occurs if the learning rate is at least twice as large as its optimal value (lecun et al., 1998a). when the learning rate is too small, training is not only slower, but may become permanently stuck with a high training error. this effect is poorly understood (it would not happen for a convex loss function). tuning the parameters other than the learning rate requires monitoring both training and test error to diagnose whether your model is overfitting or underfitting, then adjusting its capacity appropriately. if your error on the training set is higher than your target error rate, you have no choice but to increase capacity. if you are not using regularization and you are confident that your optimization algorithm is performing correctly, then you must add more layers to your network or add more hidden units. unfortunately, this increases the computational costs associated with the model. if your error on the test set is higher than your target error rate, you can take two kinds of actions.
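the idealized quadratic case mentioned above is easy to verify numerically. for f(x) = ½λx², the optimal learning rate is 1/λ and gradient descent diverges once the learning rate exceeds 2/λ; the constants below are arbitrary:

```python
# gradient descent on f(x) = 0.5 * lam * x**2, where f'(x) = lam * x
lam = 4.0
for eps in [0.2, 0.25, 0.4, 0.55]:   # 2/lam = 0.5 is the divergence threshold
    x = 1.0
    for _ in range(20):
        x = x - eps * lam * x        # one gradient step
    print(eps, abs(x))               # |x| shrinks for eps < 0.5, blows up beyond
```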
[figure 11.1: typical relationship between the learning rate and the training error, plotted with the learning rate on a logarithmic scale. notice the sharp rise in error when the learning rate is above an optimal value. this is for a fixed training time, as a smaller learning rate may sometimes only slow down training by a factor proportional to the learning rate reduction. generalization error can follow this curve or be complicated by regularization effects arising out of having a too large or too small learning rate, since poor optimization can, to some degree, reduce or prevent overfitting, and even points with equivalent training error can have different generalization error.]

the test error is the sum of the training error and the gap between training and test error. the optimal test error is found by trading off these quantities. neural networks typically perform best when the training error is very low (and thus, when capacity is high
) and the test error is primarily driven by the gap between train and test error. your goal is to reduce this gap without increasing training error faster than the gap decreases. to reduce the gap, change regularization hyperparameters to reduce effective model capacity, such as by adding dropout or weight decay. usually the best performance comes from a large model that is regularized well, for example by using dropout. most hyperparameters can be set by reasoning about whether they increase or decrease model capacity. some examples are included in table 11.1. while manually tuning hyperparameters, do not lose sight of your end goal: good performance on the test set. adding regularization is only one way to achieve this goal. as long as you have low training error, you can always reduce generalization error by collecting more training data. the brute force way to practically guarantee success is to continually increase model capacity and training set size until the task is solved. this approach does of course increase the computational cost of training and inference, so it is only feasible given appropriate resources. in
| hyperparameter | increases capacity when... | reason | caveats |
|---|---|---|---|
| number of hidden units | increased | increasing the number of hidden units increases the representational capacity of the model. | increasing the number of hidden units increases both the time and memory cost of essentially every operation on the model. |
| learning rate | tuned optimally | an improper learning rate, whether too high or too low, results in a model with low effective capacity due to optimization failure. | |
| convolution kernel width | increased | increasing the kernel width increases the number of parameters in the model. | a wider kernel results in a narrower output dimension, reducing model capacity unless you use implicit zero padding to reduce this effect. wider kernels require more memory for parameter storage and increase runtime, but a narrower output reduces memory cost. |
| implicit zero padding | increased | adding implicit zeros before convolution keeps the representation size large. | increased time and memory cost of most operations. |
| weight decay coefficient | decreased | decreasing the weight decay coefficient frees the model parameters to become larger. | |
| dropout rate | decreased | dropping units less often gives the units more opportunities to "conspire" with each other to fit the training set. | |

table 11.1: the effect of various hyperparameters on model capacity.
principle, this approach could fail due to optimization difficulties, but for many problems optimization does not seem to be a significant barrier, provided that the model is chosen appropriately.

11.4.2 automatic hyperparameter optimization algorithms

the ideal learning algorithm just takes a dataset and outputs a function, without requiring hand-tuning of hyperparameters. the popularity of several learning algorithms such as logistic regression and svms stems in part from their ability to perform well with only one or two tuned hyperparameters. neural networks can sometimes perform well with only a small number of tuned hyperparameters, but often benefit significantly from tuning of forty or more hyperparameters. manual hyperparameter tuning can work very well when the user has a good starting point, such as one determined by others having worked on the same type of application and architecture, or when the user has months or years of experience in exploring hyperparameter values for neural networks
applied to similar tasks. however, for many applications, these starting points are not available. in these cases, automated algorithms can find useful values of the hyperparameters. if we think about the way in which the user of a learning algorithm searches for good values of the hyperparameters, we realize that an optimization is taking place: we are trying to find a value of the hyperparameters that optimizes an objective function, such as validation error, sometimes under constraints (such as a budget for training time, memory or recognition time). it is therefore possible, in principle, to develop hyperparameter optimization algorithms that wrap a learning algorithm and choose its hyperparameters, thus hiding the hyperparameters of the learning algorithm from the user. unfortunately, hyperparameter optimization algorithms often have their own hyperparameters, such as the range of values that should be explored for each of the learning algorithm's hyperparameters. however, these secondary hyperparameters are
usually easier to choose, in the sense that acceptable performance may be achieved on a wide range of tasks using the same secondary hyperparameters for all tasks.

11.4.3 grid search

when there are three or fewer hyperparameters, the common practice is to perform grid search. for each hyperparameter, the user selects a small finite set of values to explore. the grid search algorithm then trains a model for every joint specification of hyperparameter values in the cartesian product of the set of values for each individual hyperparameter.
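a sketch of this procedure, where `train_and_validate` is a hypothetical function returning validation error and the value sets are illustrative:

```python
import itertools

grid = {
    "learning_rate": [0.1, 0.01, 1e-3, 1e-4, 1e-5],
    "num_hidden": [50, 100, 200, 500, 1000, 2000],
}

def grid_search(train_and_validate, grid):
    # train one model for every point in the cartesian product of the value sets
    best_error, best_config = float("inf"), None
    names = list(grid)
    for values in itertools.product(*(grid[n] for n in names)):
        config = dict(zip(names, values))
        error = train_and_validate(**config)
        if error < best_error:
            best_error, best_config = error, config
    return best_config, best_error
```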
[figure 11.2: comparison of grid search and random search. for illustration purposes we display two hyperparameters, but we are typically interested in having many more. (left) to perform grid search, we provide a set of values for each hyperparameter. the search algorithm runs training for every joint hyperparameter setting in the cross product of these sets. (right) to perform random search, we provide a probability distribution over joint hyperparameter configurations. usually most of these hyperparameters are independent from each other. common choices for the distribution over a single hyperparameter include uniform and log-uniform (to sample from a log-uniform distribution, take the exp of a sample from a uniform distribution). the search algorithm then randomly samples joint hyperparameter configurations and runs training with each of them. both grid search and random search evaluate the validation set error and return the best configuration. the figure illustrates the typical
case where only some hyperparameters have a significant influence on the result. in this illustration, only the hyperparameter on the horizontal axis has a significant effect. grid search wastes an amount of computation that is exponential in the number of non-influential hyperparameters, while random search tests a unique value of every influential hyperparameter on nearly every trial. figure reproduced with permission from bergstra and bengio (2012).]
the experiment that yields the best validation set error is then chosen as having found the best hyperparameters. see the left of figure 11.2 for an illustration of a grid of hyperparameter values. how should the lists of values to search over be chosen? in the case of numerical (ordered) hyperparameters, the smallest and largest element of each list is chosen conservatively, based on prior experience with similar experiments, to make sure that the optimal value is very likely to be in the selected range. typically, a grid search involves picking values approximately on a logarithmic scale, e.g., a learning rate taken within the set {0.1, 0.01, 10^-3, 10^-4, 10^-5}, or a number of hidden units taken within the set {50, 100, 200, 500, 1000, 2000}. grid search usually performs best when it is performed repeatedly. for example, suppose that we ran a grid search over a hyperparameter α using
values of {−1, 0, 1}. if the best value found is 1, then we underestimated the range in which the best α lies and we should shift the grid and run another search with α in, for example, {1, 2, 3}. if we find that the best value of α is 0, then we may wish to refine our estimate by zooming in and running a grid search over {−0.1, 0, 0.1}. the obvious problem with grid search is that its computational cost grows exponentially with the number of hyperparameters. if there are m hyperparameters, each taking at most n values, then the number of training and evaluation trials required grows as o(n^m). the trials may be run in parallel and exploit loose parallelism (with almost no need for communication between different machines carrying out the search). unfortunately, due to the exponential cost of grid search, even parallelization may not provide a satisfactory size of search.
11.4.4 random search

fortunately, there is an alternative to grid search that is as simple to program, more convenient to use, and converges much faster to good values of the hyperparameters: random search (bergstra and bengio, 2012). a random search proceeds as follows. first we define a marginal distribution for each hyperparameter, e.g., a bernoulli or multinoulli for binary or discrete hyperparameters, or a uniform distribution on a log-scale for positive real-valued hyperparameters. for example,

log_learning_rate ~ u(−1, −5), (11.2)
learning_rate = 10^log_learning_rate, (11.3)

where u(a, b) indicates a sample of the uniform distribution in the interval (a, b). similarly the log_number_of_hidden_units may be sampled from u(log(50), log(2000)).
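a sketch of these sampling rules, following equations 11.2-11.3 and the hidden-unit example above; the number of trials is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
for trial in range(5):
    # draw the log of the learning rate uniformly, then exponentiate
    log_learning_rate = rng.uniform(-5, -1)
    learning_rate = 10 ** log_learning_rate
    # draw the number of hidden units log-uniformly and round to an integer
    num_hidden = int(np.exp(rng.uniform(np.log(50), np.log(2000))))
    print(trial, learning_rate, num_hidden)
```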
unlike in the case of a grid search, one should not discretize or bin the values of the hyperparameters. this allows one to explore a larger set of values, and does not incur additional computational cost. in fact, as illustrated in figure 11.2, a random search can be exponentially more efficient than a grid search, when there are several hyperparameters that do not strongly affect the performance measure. this is studied at length in bergstra and bengio (2012), who found that random search reduces the validation set error much faster than grid search, in terms of the number of trials run by each method. as with grid search, one may often want to run repeated versions of random search, to refine the search based on the results of the first run. the main reason why random search finds good solutions faster than grid search is that there are no wasted experimental runs, unlike in the case of grid search, when two values of
a hyperparameter (given values of the other hyperparameters) would give the same result. in the case of grid search, the other hyperparameters would have the same values for these two runs, whereas with random search, they would usually have different values. hence if the change between these two values does not make much difference in terms of validation set error, grid search will unnecessarily repeat two equivalent experiments while random search will still give two independent explorations of the other hyperparameters.

11.4.5 model-based hyperparameter optimization

the search for good hyperparameters can be cast as an optimization problem. the decision variables are the hyperparameters. the cost to be optimized is the validation set error that results from training using these hyperparameters. in simplified settings where it is feasible to compute the gradient of some differentiable error measure on the validation set with respect to the hyperparameters, we can simply follow
this gradient (bengio et al., 1999; bengio, 2000; maclaurin et al., 2015). unfortunately, in most practical settings, this gradient is unavailable, either due to its high computation and memory cost, or due to hyperparameters having intrinsically non-differentiable interactions with the validation set error, as in the case of discrete-valued hyperparameters. to compensate for this lack of a gradient, we can build a model of the validation set error, then propose new hyperparameter guesses by performing optimization within this model. most model-based algorithms for hyperparameter search use a bayesian regression model to estimate both the expected value of the validation set error for each hyperparameter and the uncertainty around this expectation.
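a toy sketch of this idea, using scikit-learn's gaussian process regressor as the bayesian regression model and a lower confidence bound as the acquisition rule; this is not spearmint, tpe or smac, and `validation_error` is a made-up stand-in for training and evaluating a model:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def validation_error(log_lr):
    # made-up objective; pretend the optimum is near log_lr = -3
    return (log_lr + 3.0) ** 2 + 0.1 * np.random.rand()

observed_x = [[-1.0], [-5.0]]                       # hyperparameters tried so far
observed_y = [validation_error(x[0]) for x in observed_x]
candidates = np.linspace(-5, -1, 100).reshape(-1, 1)

for _ in range(10):
    # bayesian regression model of validation error and its uncertainty
    gp = GaussianProcessRegressor().fit(observed_x, observed_y)
    mean, std = gp.predict(candidates, return_std=True)
    # lower confidence bound: exploit low predicted error, explore high uncertainty
    proposal = candidates[np.argmin(mean - 1.0 * std)]
    observed_x.append(list(proposal))
    observed_y.append(validation_error(proposal[0]))

best = observed_x[int(np.argmin(observed_y))]
print(best, min(observed_y))
```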
optimization thus involves a tradeoff between exploration (proposing hyperparameters for which there is high uncertainty, which may lead to a large improvement but may also perform poorly) and exploitation (proposing hyperparameters which the model is confident will perform as well as any hyperparameters it has seen so far, usually hyperparameters that are very similar to ones it has seen before). contemporary approaches to hyperparameter optimization include spearmint (snoek et al., 2012), tpe (bergstra et al., 2011) and smac (hutter et al., 2011). currently, we cannot unambiguously recommend bayesian hyperparameter optimization as an established tool for achieving better deep learning results or for obtaining those results with less effort. bayesian hyperparameter optimization sometimes performs comparably to human experts, sometimes better, but fails catastrophically on other problems. it may be worth trying to see if it works on a particular problem but is not yet sufficiently mature or reliable. that being said,
hyperparameter optimization is an important field of research that, while often driven primarily by the needs of deep learning, holds the potential to benefit not only the entire field of machine learning but the discipline of engineering in general. one drawback common to most hyperparameter optimization algorithms with more sophistication than random search is that they require a training experiment to run to completion before they are able to extract any information from the experiment. this is much less efficient, in the sense of how much information can be gleaned early in an experiment, than manual search by a human practitioner, since one can usually tell early on if some set of hyperparameters is completely pathological. swersky et al. (2014) have introduced an early version of an algorithm that maintains a set of multiple experiments. at various time points, the hyperparameter optimization algorithm can choose to begin a new experiment, to "freeze" a running experiment that is not
promising, or to "thaw" and resume an experiment that was earlier frozen but now appears promising given more information.

11.5 debugging strategies

when a machine learning system performs poorly, it is usually difficult to tell whether the poor performance is intrinsic to the algorithm itself or whether there is a bug in the implementation of the algorithm. machine learning systems are difficult to debug for a variety of reasons. in most cases, we do not know a priori what the intended behavior of the algorithm is. in fact, the entire point of using machine learning is that it will discover useful behavior that we were not able to specify ourselves. if we train a
neural network on a new classification task and it achieves 5% test error, we have no straightforward way of knowing if this is the expected behavior or sub-optimal behavior. a further difficulty is that most machine learning models have multiple parts that are each adaptive. if one part is broken, the other parts can adapt and still achieve roughly acceptable performance. for example, suppose that we are training a neural net with several layers parametrized by weights w and biases b. suppose further that we have manually implemented the gradient descent rule for each parameter separately, and we made an error in the update for the biases:

b ← b − α, (11.4)

where α is the learning rate. this erroneous update does not use the gradient at all. it causes the biases to constantly become negative throughout learning, which is clearly not a correct implementation of any reasonable learning algorithm. the bug may not be apparent just from examining the output of the model though.
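a sketch of this bug and of a finite-difference check that would expose it, on a made-up linear regression problem:

```python
import numpy as np

X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([1.0, 2.0])
w, b, alpha = np.zeros(2), 0.0, 0.1

def loss(w, b):
    return 0.5 * np.mean((X @ w + b - y) ** 2)

grad_b = np.mean(X @ w + b - y)        # analytic gradient of the loss w.r.t. b

b_buggy = b - alpha                     # the erroneous update of equation 11.4
b_correct = b - alpha * grad_b          # the intended gradient descent update

# a centered finite difference approximates the true gradient; comparing it with
# the quantity actually subtracted in the update reveals the bug
eps = 1e-5
numeric_grad_b = (loss(w, b + eps) - loss(w, b - eps)) / (2 * eps)
print(grad_b, numeric_grad_b)           # these agree; the buggy step of size alpha does not
```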
Most debugging strategies for neural nets are designed to get around one or both of these two difficulties. Either we design a case that is so simple that the correct behavior actually can be predicted, or we design a test that exercises one part of the neural net implementation in isolation. Some important debugging tests include:

Visualize the model in action: When training a model to detect objects in images, view some images with the detections proposed by the model displayed superimposed on the image. When training a generative model of speech, listen to some of the speech samples it produces. This may seem obvious, but it is easy to fall into the practice of looking only at quantitative performance measurements like accuracy or log-likelihood. Directly observing the machine learning model performing its task will help you determine whether the quantitative performance numbers it achieves seem reasonable. Evaluation bugs can be some of the most devastating bugs, because they can mislead you into believing your system is performing well when it is not.
Visualize the worst mistakes: Most models are able to output some sort of confidence measure for the task they perform. For example, classifiers based on a softmax output layer assign a probability to each class. The probability assigned to the most likely class thus gives an estimate of the confidence the model has in its classification decision. Typically, maximum likelihood training results in these values being overestimates rather than accurate probabilities of correct prediction,
but they are somewhat useful in the sense that examples that are actually less likely to be correctly labeled receive smaller probabilities under the model. By viewing the training set examples that are the hardest to model correctly, one can often discover problems with the way the data has been preprocessed or labeled. For example, the Street View transcription system originally had a problem where the address number detection system would crop the image too tightly and omit some of the digits. The transcription network then assigned very low probability to the correct answer on these images. Sorting the images to identify the most confident mistakes showed that there was a systematic problem with the cropping. Modifying the detection system to crop much wider images resulted in much better performance of the overall system, even though the transcription network then had to process greater variation in the position and scale of the address numbers.
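A minimal sketch of this test, assuming only a model that exposes class probabilities (`predict_proba` and `show` are hypothetical stand-ins, not any particular library's API):

```python
import numpy as np

def worst_mistakes(predict_proba, X, y, k=10):
    """Return the k training examples the model is least confident about."""
    probs = predict_proba(X)                  # shape (n_examples, n_classes)
    p_true = probs[np.arange(len(y)), y]      # probability of the true class
    worst = np.argsort(p_true)[:k]            # least confident in the truth
    return worst, p_true[worst]

# Typical use: inspect the hardest examples by eye, looking for
# preprocessing or labeling problems rather than genuine model errors.
# idx, conf = worst_mistakes(model.predict_proba, X_train, y_train)
# for i in idx: show(X_train[i], y_train[i])
```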
Reasoning about software using train and test error: It is often difficult to determine whether the underlying software is correctly implemented. Some clues can be obtained from the train and test error. If training error is low but test error is high, then it is likely that the training procedure works correctly and the model is overfitting for fundamental algorithmic reasons. An alternative possibility is that the test error is measured incorrectly because of a problem with saving the model after training and then reloading it for test set evaluation, or because the test data was prepared differently from the training data. If both train and test error are high, then it is difficult to determine whether there is a software defect or whether the model is underfitting due to fundamental algorithmic reasons. This scenario requires further tests, described next.

Fit a tiny dataset: If you have high error on the training set, determine whether it is due to genuine underfitting or due to a software defect. Usually even small models can be guaranteed to be able to fit a sufficiently small dataset. For example, a classification dataset with only one example can be fit just by setting the biases of the output layer correctly.
Usually, if you cannot train a classifier to correctly label a single example, an autoencoder to successfully reproduce a single example with high fidelity, or a generative model to consistently emit samples resembling a single example, there is a software defect preventing successful optimization on the training set. This test can be extended to a small dataset with few examples.
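The sketch below illustrates the test with a small softmax regression trained by gradient descent on five synthetic examples; all shapes and constants are illustrative. If even this model cannot classify five examples perfectly, the optimization code is suspect.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))           # five examples, 20 features
y = np.array([0, 1, 2, 1, 0])          # three classes

W, b = np.zeros((20, 3)), np.zeros(3)
for step in range(2000):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.mean(np.log(p[np.arange(5), y]))   # mean cross-entropy
    grad = (p - np.eye(3)[y]) / 5                 # gradient w.r.t. logits
    W -= 0.1 * (X.T @ grad)
    b -= 0.1 * grad.sum(axis=0)

assert (p.argmax(axis=1) == y).all(), "cannot fit 5 examples -> suspect a bug"
print(f"loss after overfitting 5 examples: {loss:.4f}")
```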
Compare back-propagated derivatives to numerical derivatives: If you are using a software framework that requires you to implement your own gradient computations, or if you are adding a new operation to a differentiation library and must define its bprop method, then a common source of error is implementing the gradient expression incorrectly. One way to verify that these derivatives are correct is to compare the derivatives computed by your implementation of automatic differentiation to the derivatives computed by finite differences. Because

$$f'(x) = \lim_{\epsilon \to 0} \frac{f(x + \epsilon) - f(x)}{\epsilon}, \quad (11.5)$$

we can approximate the derivative by using a small, finite $\epsilon$:

$$f'(x) \approx \frac{f(x + \epsilon) - f(x)}{\epsilon}. \quad (11.6)$$

We can improve the accuracy of the approximation by using the centered difference:

$$f'(x) \approx \frac{f(x + \tfrac{1}{2}\epsilon) - f(x - \tfrac{1}{2}\epsilon)}{\epsilon}. \quad (11.7)$$

The perturbation size $\epsilon$ must be chosen large enough to ensure that the perturbation is not rounded down too much by finite-precision numerical computations. Usually, we will want to test the gradient or Jacobian of a vector-valued function $g : \mathbb{R}^m \to \mathbb{R}^n$. Unfortunately, finite differencing only allows us to take a single derivative at a time.
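A minimal sketch of the centered-difference check of equation 11.7, applied to a scalar function and a hand-written derivative (the choice of $\epsilon$ is a common rule of thumb, not a prescription from the book):

```python
import numpy as np

def centered_difference(f, x, eps=1e-5):
    # Equation 11.7: symmetric perturbation halves are O(eps^2) accurate.
    return (f(x + 0.5 * eps) - f(x - 0.5 * eps)) / eps

f = lambda x: np.tanh(3 * x)
df = lambda x: 3 * (1 - np.tanh(3 * x) ** 2)   # claimed analytic derivative

x = 0.7
print(abs(centered_difference(f, x) - df(x)))  # tiny (~1e-10) if df is right
```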
We can either run the finite-difference test $mn$ times to evaluate all of the partial derivatives of $g$, or we can apply the test to a new function that uses random projections at both the input and output of $g$. For example, we can apply our test of the implementation of the derivatives to $f(x)$ where $f(x) = u^\top g(vx)$, with $u$ and $v$ randomly chosen vectors. Computing $f'(x)$ correctly requires being able to back-propagate through $g$ correctly, yet it is efficient to do with finite differences because $f$ has only a single input and a single output. It is usually a good idea to repeat this test for more than one value of $u$ and $v$, to reduce the chance that the test overlooks mistakes that are orthogonal to the random projection.
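A sketch of this random-projection trick, where the analytic Jacobian of a toy $g$ stands in for the back-propagation implementation under test:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.normal(size=(n, m))
g = lambda z: np.tanh(A @ z)                             # toy g: R^m -> R^n
g_jacobian = lambda z: (1 - np.tanh(A @ z) ** 2)[:, None] * A   # dg/dz

u, v = rng.normal(size=n), rng.normal(size=m)
f = lambda x: u @ g(v * x)                # scalar-in, scalar-out projection
df = lambda x: u @ g_jacobian(v * x) @ v  # chain rule: u^T J(vx) v

eps, x0 = 1e-5, 0.3
fd = (f(x0 + 0.5 * eps) - f(x0 - 0.5 * eps)) / eps
print(abs(fd - df(x0)))  # a large value indicates a Jacobian bug
# Repeat with fresh u, v to catch errors orthogonal to one projection.
```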
If one has access to numerical computation on complex numbers, then there is a very efficient way to numerically estimate the gradient by using complex numbers as input to the function (Squire and Trapp, 1998). The method is based on the observation that

$$f(x + i\epsilon) = f(x) + i\epsilon f'(x) + O(\epsilon^2), \quad (11.8)$$

$$\operatorname{real}(f(x + i\epsilon)) = f(x) + O(\epsilon^2), \quad \operatorname{imag}\!\left(\frac{f(x + i\epsilon)}{\epsilon}\right) = f'(x) + O(\epsilon^2), \quad (11.9)$$

where $i = \sqrt{-1}$. Unlike in the real-valued case above, there is no cancellation effect due to taking the difference between the values of $f$ at different points. This allows the use of tiny values of $\epsilon$, such as $\epsilon = 10^{-150}$, which make the $O(\epsilon^2)$ error insignificant for all practical purposes.
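A sketch of this complex-step method, which works as long as $f$ is implemented with operations that accept complex arguments (NumPy's elementwise functions such as exp and sin do):

```python
import numpy as np

def complex_step_derivative(f, x, eps=1e-150):
    # Equation 11.9: the derivative is read off the imaginary part; with no
    # subtraction there is no cancellation, so eps can be extremely small.
    return np.imag(f(x + 1j * eps)) / eps

f = lambda x: np.exp(x) * np.sin(x)
df = lambda x: np.exp(x) * (np.sin(x) + np.cos(x))  # exact derivative

x = 1.3
print(complex_step_derivative(f, x) - df(x))  # ~0 to machine precision
```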