phenomena. We have not yet described precisely what these individual cells detect. In a deep, nonlinear network, it can be difficult to understand the function of individual cells. Simple cells in the first layer are easier to analyze, because their responses are driven by a linear function. In an artificial neural network, we can just display an image of the convolution kernel to see what the corresponding channel of a convolutional layer responds to. In a biological neural network, we do not have access to the weights themselves. Instead, we put an electrode in the neuron itself, display several samples of white noise images in front of the animal's retina, and record how each of these samples causes the neuron to activate. We can then fit a linear model to these responses in order to obtain an approximation of the neuron's weights. This approach is known as reverse correlation (Ringach and Shapley, 2004). Reverse correlation shows us that most V1 cells have weights that are described by Gabor functions.
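This reverse-correlation procedure can be simulated in a few lines. The sketch below is entirely hypothetical: a linear "neuron" whose hidden weights are random rather than Gabor-shaped, white-noise stimuli, and a least-squares fit to recover the weights from noisy responses.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                      # 8x8 image patches, flattened
n = 5000                    # number of white-noise stimuli

true_w = rng.normal(size=d)             # hidden "neuronal" weights
stimuli = rng.normal(size=(n, d))       # white-noise images
responses = stimuli @ true_w + 0.1 * rng.normal(size=n)  # noisy firing rates

# Reverse correlation: fit a linear model, response ≈ stimulus · w
w_hat, *_ = np.linalg.lstsq(stimuli, responses, rcond=None)

corr = np.corrcoef(true_w, w_hat)[0, 1]
print(f"correlation between true and recovered weights: {corr:.3f}")
```

With many more stimuli than weights, the recovered weights correlate almost perfectly with the true ones; in a real experiment the recovered pattern is the Gabor-like receptive field.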
Source: Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (PDF page 383).
The Gabor function describes the weight at a 2-D point in the image. We can think of an image as being a function of 2-D coordinates, I(x, y). Likewise, we can think of a simple cell as sampling the image at a set of locations, defined by a set of x coordinates 𝕏 and a set of y coordinates 𝕐, and applying weights that are also a function of the location, w(x, y). From this point of view, the response of a simple cell to an image is given by

    s(I) = Σ_{x∈𝕏} Σ_{y∈𝕐} w(x, y) I(x, y).    (9.15)

Specifically, w(x, y) takes the form of a Gabor function:

    w(x, y; α, β_x, β_y, f, φ, x₀, y₀, τ) = α exp(−β_x x′² − β_y y′²) cos(f x′ + φ),    (9.16)

where

    x′ = (x − x₀) cos(τ) + (y − y₀) sin(τ)    (9.17)
Chapter 9. Convolutional Networks

and

    y′ = −(x − x₀) sin(τ) + (y − y₀) cos(τ).    (9.18)

Here, α, β_x, β_y, f, φ, x₀, y₀, and τ are parameters that control the properties of the Gabor function. Figure 9.18 shows some examples of Gabor functions with different settings of these parameters.

The parameters x₀, y₀, and τ define a coordinate system. We translate and rotate x and y to form x′ and y′. Specifically, the simple cell will respond to image features centered at the point (x₀, y₀), and it will respond to changes in brightness as we move along a line rotated τ radians from the horizontal.

Viewed as a function of x′ and y′, the function w then responds to changes in brightness as we move along the x′ axis. It has two important factors: one is a Gaussian function and the other is a cosine function. The Gaussian factor α exp(−β_x x′² − β_y y′²) can be seen as a gating term that ensures the simple cell will only respond to
values near where x′ and y′ are both zero, in other words, near the center of the cell's receptive field. The scaling factor α adjusts the total magnitude of the simple cell's response, while β_x and β_y control how quickly its receptive field falls off. The cosine factor cos(f x′ + φ) controls how the simple cell responds to changing brightness along the x′ axis. The parameter f controls the frequency of the cosine, and φ controls its phase.

Altogether, this cartoon view of simple cells means that a simple cell responds to a specific spatial frequency of brightness in a specific direction at a specific location. Simple cells are most excited when the wave of brightness in the image has the same phase as the weights. This occurs when the image is bright where the weights are positive and dark where the weights are negative. Simple cells are most inhibited when the wave of brightness is fully out of phase with the weights:
when the image is dark where the weights are positive and bright where the weights are negative.

The cartoon view of a complex cell is that it computes the L² norm of the 2-D vector containing two simple cells' responses: c(I) = √(s₀(I)² + s₁(I)²). An important special case occurs when s₁ has all of the same parameters as s₀ except for φ, and φ is set such that s₁ is one quarter cycle out of phase with s₀. In this case, s₀ and s₁ form a quadrature pair. A complex cell defined in this way responds when the Gaussian reweighted image I(x, y) exp(−β_x x′² − β_y y′²) contains a high-amplitude sinusoidal wave with frequency f in direction τ near (x₀, y₀), regardless of the phase offset of this wave. In other words, the complex cell is invariant to small translations of the
image in direction τ, or to negating the image (replacing black with white and vice versa).
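The Gabor machinery above can be checked numerically. The following NumPy sketch (mine, with arbitrary parameter values, not taken from the book) builds Gabor weights per equations 9.16-9.18, computes simple-cell responses per equation 9.15, and verifies that a quadrature-pair complex cell responds almost identically to gratings of every phase while each simple cell's response oscillates strongly.

```python
import numpy as np

def gabor(xs, ys, alpha=1.0, bx=0.05, by=0.05, f=0.5, phi=0.0,
          x0=0.0, y0=0.0, tau=0.0):
    """Gabor weights following equations 9.16-9.18."""
    xp = (xs - x0) * np.cos(tau) + (ys - y0) * np.sin(tau)     # x'
    yp = -(xs - x0) * np.sin(tau) + (ys - y0) * np.cos(tau)    # y'
    return alpha * np.exp(-bx * xp**2 - by * yp**2) * np.cos(f * xp + phi)

coords = np.arange(-15, 16, dtype=float)
xs, ys = np.meshgrid(coords, coords)

w0 = gabor(xs, ys, phi=0.0)
w1 = gabor(xs, ys, phi=np.pi / 2)   # quarter cycle out of phase: quadrature pair

simple_resp, complex_resp = [], []
for phase in np.linspace(0.0, 2.0 * np.pi, 50):
    image = np.cos(0.5 * xs + phase)        # grating matching f and tau
    s0 = np.sum(w0 * image)                 # eq. 9.15 for each simple cell
    s1 = np.sum(w1 * image)
    simple_resp.append(s0)
    complex_resp.append(np.hypot(s0, s1))   # c(I) = sqrt(s0^2 + s1^2)

print("simple-cell response range: ", max(simple_resp) - min(simple_resp))
print("complex-cell response range:", max(complex_resp) - min(complex_resp))
```

The simple-cell response swings through a large range as the grating's phase shifts, while the complex-cell response stays nearly constant, which is the phase invariance described in the text.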
Figure 9.18: Gabor functions with a variety of parameter settings. White indicates large positive weight, black indicates large negative weight, and the background gray corresponds to zero weight. (Left) Gabor functions with different values of the parameters that control the coordinate system: x₀, y₀, and τ. Each Gabor function in this grid is assigned a value of x₀ and y₀ proportional to its position in its grid, and τ is chosen so that each Gabor filter is sensitive to the direction radiating out from the center of the grid. For the other two plots, x₀, y₀, and τ are fixed to zero. (Center) Gabor functions with different Gaussian scale parameters β_x and β_y. Gabor functions are arranged in increasing width (decreasing β_x) as we move left to right through the grid, and increasing height (decreasing β_y) as we move top to bottom. For the other two plots, the β values are fixed to 1.5× the image width. (Right) Gabor functions with different sinusoid parameters f and φ. As we move top to bottom, f increases, and as we move left to right, φ increases.
For the other two plots, φ is fixed to 0 and f is fixed to 5× the image width.

Some of the most striking correspondences between neuroscience and machine learning come from visually comparing the features learned by machine learning models with those employed by V1. Olshausen and Field (1996) showed that a simple unsupervised learning algorithm, sparse coding, learns features with receptive fields similar to those of simple cells. Since then, we have found that an extremely wide variety of statistical learning algorithms learn features with Gabor-like functions when applied to natural images. This includes most deep learning algorithms, which learn these features in their first layer. Figure 9.19 shows some examples. Because so many different learning algorithms learn edge detectors, it is difficult to conclude that any specific learning algorithm is the "right" model of the brain just based on the features that it learns (though it can certainly be a bad
sign if a learning algorithm does not learn some sort of edge detector when applied to natural images). These features are an important part of the statistical structure of natural images and can be recovered by many different approaches to statistical modeling. See Hyvärinen et al. (2009) for a review of the field of natural image statistics.
Figure 9.19: Many machine learning algorithms learn features that detect edges or specific colors of edges when applied to natural images. These feature detectors are reminiscent of the Gabor functions known to be present in primary visual cortex. (Left) Weights learned by an unsupervised learning algorithm (spike and slab sparse coding) applied to small image patches. (Right) Convolution kernels learned by the first layer of a fully supervised convolutional maxout network. Neighboring pairs of filters drive the same maxout unit.

9.11 Convolutional Networks and the History of Deep Learning

Convolutional networks have played an important role in the history of deep learning. They are a key example of a successful application of insights obtained by studying the brain to machine learning applications. They were also some of the first deep models to perform well, long before arbitrary deep models were considered viable. Convolutional networks were also some of the first neural networks to solve important commercial applications and remain at the forefront of commercial applications of deep learning today. For example, in the 1990s, the neural network research group at AT&T developed a convolutional
network for reading checks (LeCun et al., 1998b). By the end of the 1990s, this system deployed by NEC was reading over 10% of all the checks in the US. Later, several OCR and handwriting recognition systems based on convolutional nets were deployed by Microsoft (Simard et al., 2003). See chapter 12 for more details on such applications and more modern applications of convolutional networks. See LeCun et al. (2010) for a more in-depth history of convolutional networks up to 2010.

Convolutional networks were also used to win many contests. The current intensity of commercial interest in deep learning began when Krizhevsky et al. (2012) won the ImageNet object recognition challenge, but convolutional networks
had been used to win other machine learning and computer vision contests with less impact for years earlier.

Convolutional nets were some of the first working deep networks trained with back-propagation. It is not entirely clear why convolutional networks succeeded when general back-propagation networks were considered to have failed. It may simply be that convolutional networks were more computationally efficient than fully connected networks, so it was easier to run multiple experiments with them and tune their implementation and hyperparameters. Larger networks also seem to be easier to train. With modern hardware, large fully connected networks appear to perform reasonably on many tasks, even when using datasets that were available and activation functions that were popular during the times when fully connected networks were believed not to work well. It may be that the primary barriers to the success of neural networks were psychological (practitioners did not expect neural networks to work, so they did not make a serious effort to use neural networks). Whatever the case, it is fortunate that convolutional networks performed well decades ago. In many ways, they carried the torch for the rest of deep learning and paved the way to the acceptance of neural networks in general.
Convolutional networks provide a way to specialize neural networks to work with data that has a clear grid-structured topology and to scale such models to very large size. This approach has been the most successful on a two-dimensional image topology. To process one-dimensional, sequential data, we turn next to another powerful specialization of the neural networks framework: recurrent neural networks.
Chapter 10. Sequence Modeling: Recurrent and Recursive Nets

Recurrent neural networks, or RNNs (Rumelhart et al., 1986a), are a family of neural networks for processing sequential data. Much as a convolutional network is a neural network that is specialized for processing a grid of values X such as an image, a recurrent neural network is a neural network that is specialized for processing a sequence of values x^(1), …, x^(τ). Just as convolutional networks can readily scale to images with large width and height, and some convolutional networks can process images of variable size, recurrent networks can scale to much longer sequences than would be practical for networks without sequence-based specialization. Most recurrent networks can also process sequences of variable length.

To go from multi-layer networks to recurrent networks, we need to take advantage of one of the early ideas found in machine learning and statistical models of the 1980s: sharing parameters across different parts of a model. Parameter sharing makes it possible to extend and apply the model to examples of different forms (different lengths, here) and generalize across them.
If we had separate parameters for each value of the time index, we could not generalize to sequence lengths not seen during training, nor share statistical strength across different sequence lengths and across different positions in time. Such sharing is particularly important when a specific piece of information can occur at multiple positions within the sequence. For example, consider the two sentences "I went to Nepal in 2009" and "In 2009, I went to Nepal." If we ask a machine learning model to read each sentence and extract the year in which the narrator went to Nepal, we would like it to recognize the year 2009 as the relevant piece of information, whether it appears in the sixth
word or the second word of the sentence. Suppose that we trained a feedforward network that processes sentences of fixed length. A traditional fully connected feedforward network would have separate parameters for each input feature, so it would need to learn all of the rules of the language separately at each position in the sentence. By comparison, a recurrent neural network shares the same weights across several time steps.

A related idea is the use of convolution across a 1-D temporal sequence. This convolutional approach is the basis for time-delay neural networks (Lang and Hinton, 1988; Waibel et al., 1989; Lang et al., 1990). The convolution operation allows a network to share parameters across time, but is shallow. The output of convolution is a sequence where each member of the output is a function of a small number of neighboring members of the input. The idea of parameter sharing manifests in the application of the same convolution kernel at each time step. Recurrent networks share parameters in a different way. Each member of the output is a function of the previous members of the output.
Each member of the output is produced using the same update rule applied to the previous outputs. This recurrent formulation results in the sharing of parameters through a very deep computational graph.

For simplicity of exposition, we refer to RNNs as operating on a sequence that contains vectors x^(t) with the time step index t ranging from 1 to τ. In practice, recurrent networks usually operate on minibatches of such sequences, with a different sequence length τ for each member of the minibatch. We have omitted the minibatch indices to simplify notation. Moreover, the time step index need not literally refer to the passage of time in the real world. Sometimes it refers only to the position in the sequence. RNNs may also be applied in two dimensions across spatial data such as images, and even when applied to data involving time, the network may have connections that go backwards in time, provided that the entire sequence is observed before it is provided to the network.

This chapter extends the idea of a computational graph to include cycles. These cycles
represent the influence of the present value of a variable on its own value at a future time step. Such computational graphs allow us to define recurrent neural networks. We then describe many different ways to construct, train, and use recurrent neural networks. For more information on recurrent neural networks than is available in this chapter, we refer the reader to the textbook of Graves (2012).
10.1 Unfolding Computational Graphs

A computational graph is a way to formalize the structure of a set of computations, such as those involved in mapping inputs and parameters to outputs and loss. Please refer to section 6.5.1 for a general introduction. In this section we explain the idea of unfolding a recursive or recurrent computation into a computational graph that has a repetitive structure, typically corresponding to a chain of events. Unfolding this graph results in the sharing of parameters across a deep network structure.

For example, consider the classical form of a dynamical system:

    s^(t) = f(s^(t−1); θ),    (10.1)

where s^(t) is called the state of the system. Equation 10.1 is recurrent because the definition of s at time t refers back to the same definition at time t − 1.

For a finite number of time steps τ, the graph can be unfolded by applying the definition τ − 1 times. For example, if we unfold equation 10.1 for τ = 3 time steps, we obtain
    s^(3) = f(s^(2); θ)    (10.2)
          = f(f(s^(1); θ); θ).    (10.3)

Unfolding the equation by repeatedly applying the definition in this way has yielded an expression that does not involve recurrence. Such an expression can now be represented by a traditional directed acyclic computational graph. The unfolded computational graph of equation 10.1 and equation 10.3 is illustrated in figure 10.1.

Figure 10.1: The classical dynamical system described by equation 10.1, illustrated as an unfolded computational graph. Each node represents the state at some time t, and the function f maps the state at t to the state at t + 1.
The same parameters (the same value of θ used to parametrize f) are used for all time steps.

As another example, let us consider a dynamical system driven by an external signal x^(t),

    s^(t) = f(s^(t−1), x^(t); θ),    (10.4)
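Returning to equations 10.1-10.3, the unfolding can be sketched in a few lines of Python. The scalar map f and its parameters θ below are invented for illustration; the point is only that applying the same f repeatedly reproduces the fully unrolled composition.

```python
# Toy dynamical system s^(t) = f(s^(t-1); theta), as in equation 10.1.
# f and theta are invented for illustration.
def f(s, theta):
    a, b = theta
    return a * s + b

theta = (0.5, 1.0)
s1 = 4.0                      # s^(1)

# Recurrent view: apply the same f at every time step.
s = s1
for _ in range(2):            # two applications take s^(1) to s^(3)
    s = f(s, theta)

# Unfolded view, equation 10.3: s^(3) = f(f(s^(1); theta); theta).
s3_unfolded = f(f(s1, theta), theta)
print(s, s3_unfolded)
```

Both views compute the same value, which is exactly what unfolding asserts: the recurrence and its unrolled directed acyclic graph describe one computation.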
where we see that the state now contains information about the whole past sequence.

Recurrent neural networks can be built in many different ways. Much as almost any function can be considered a feedforward neural network, essentially any function involving recurrence can be considered a recurrent neural network.

Many recurrent neural networks use equation 10.5 or a similar equation to define the values of their hidden units. To indicate that the state is the hidden units of the network, we now rewrite equation 10.4 using the variable h to represent the state:

    h^(t) = f(h^(t−1), x^(t); θ),    (10.5)

illustrated in figure 10.2. Typical RNNs will add extra architectural features such as output layers that read information out of the state h to make predictions.

When the recurrent network is trained to perform a task that requires predicting the future from the past, the network typically learns to use h^(t) as a kind of lossy summary of the task-relevant aspects of the past sequence of inputs up to t. This summary is in general necessarily lossy, since it maps an arbitrary length sequence
(x^(t), x^(t−1), x^(t−2), …, x^(2), x^(1)) to a fixed length vector h^(t). Depending on the training criterion, this summary might selectively keep some aspects of the past sequence with more precision than other aspects. For example, if the RNN is used in statistical language modeling, typically to predict the next word given previous words, it may not be necessary to store all of the information in the input sequence up to time t, but rather only enough information to predict the rest of the sentence. The most demanding situation is when we ask h^(t) to be rich enough to allow one to approximately recover the input sequence, as in autoencoder frameworks (chapter 14).
Figure 10.2: A recurrent network with no outputs. This recurrent network just processes information from the input x by incorporating it into the state h that is passed forward through time. (Left) Circuit diagram. The black square indicates a delay of a single time step. (Right) The same network seen as an unfolded computational graph, where each node is now associated with one particular time instance.

Equation 10.5 can be drawn in two different ways. One way to draw the RNN is with a diagram containing one node for every component that might exist in a
physical implementation of the model, such as a biological neural network. In this view, the network defines a circuit that operates in real time, with physical parts whose current state can influence their future state, as in the left of figure 10.2. Throughout this chapter, we use a black square in a circuit diagram to indicate that an interaction takes place with a delay of a single time step, from the state at time t to the state at time t + 1. The other way to draw the RNN is as an unfolded computational graph, in which each component is represented by many different variables, with one variable per time step, representing the state of the component at that point in time. Each variable for each time step is drawn as a separate node of the computational graph, as in the right of figure 10.2. What we call unfolding is the operation that maps a circuit as in the left side of the figure to a computational graph with repeated pieces as in the right side. The unfolded graph now has a size that depends on the sequence length.
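A small Python sketch may help here (the scalar transition and its parameters are made up, standing in for equation 10.5): the state after t steps can be computed either by a composed function g^(t) of the whole input history or by repeated application of a single transition f with shared parameters, and the two agree.

```python
# Hypothetical scalar "RNN cell" f(h, x; theta), standing in for eq. 10.5.
def f(h, x, theta):
    w, u = theta
    return w * h + u * x

def g(xs, theta, h0=0.0):
    """g^(t): maps the whole input history to h^(t), implemented as
    repeated application of the shared transition f."""
    h = h0
    for x in xs:
        h = f(h, x, theta)
    return h

theta = (0.9, 0.5)
xs = [1.0, 2.0, 3.0]

# Explicit unrolling: h^(3) = f(f(f(h^(0), x^(1)), x^(2)), x^(3)).
h3 = f(f(f(0.0, xs[0], theta), xs[1], theta), xs[2], theta)
print(g(xs, theta), h3)
```

The same pair of parameters (w, u) is reused at every step, which is the parameter sharing that unfolding makes explicit.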
We can represent the unfolded recurrence after t steps with a function g^(t):

    h^(t) = g^(t)(x^(t), x^(t−1), x^(t−2), …, x^(2), x^(1))    (10.6)
          = f(h^(t−1), x^(t); θ).    (10.7)

The function g^(t) takes the whole past sequence (x^(t), x^(t−1), x^(t−2), …, x^(2), x^(1)) as input and produces the current state, but the unfolded recurrent structure allows us to factorize g^(t) into repeated application of a function f. The unfolding process thus introduces two major advantages:

1. Regardless of the sequence length, the learned model always has the same input size, because it is specified in terms of transition from one state to another state, rather than specified in terms of a variable-length history
of states.

2. It is possible to use the same transition function f with the same parameters at every time step.

These two factors make it possible to learn a single model f that operates on all time steps and all sequence lengths, rather than needing to learn a separate model g^(t) for all possible time steps. Learning a single, shared model allows generalization to sequence lengths that did not appear in the training set, and allows the model to be estimated with far fewer training examples than would be required without parameter sharing.

Both the recurrent graph and the unrolled graph have their uses. The recurrent graph is succinct. The unfolded graph provides an explicit description of which computations to perform. The unfolded graph also helps to illustrate the idea of
Chapter 10. Sequence Modeling: Recurrent and Recursive Nets

information flow forward in time (computing outputs and losses) and backward in time (computing gradients) by explicitly showing the path along which this information flows.

10.2 Recurrent Neural Networks

Armed with the graph unrolling and parameter sharing ideas of section 10.1, we can design a wide variety of recurrent neural networks.
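Before looking at specific architectures, the unfolding idea can be made concrete in code. The sketch below (a minimal illustration; the dimensions, random weights, and the tanh update are assumptions for demonstration, not from the text) shows that repeatedly applying one shared transition $f$ implements the whole-history function $g^{(t)}$ of equations 10.6–10.7:

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(3, 3))   # hidden-to-hidden weights (illustrative)
U = 0.1 * rng.normal(size=(3, 2))   # input-to-hidden weights (illustrative)

def f(h_prev, x):
    """One shared transition step: h(t) = f(h(t-1), x(t); theta)."""
    return np.tanh(W @ h_prev + U @ x)

def g(xs):
    """g(t) of eq. 10.6: maps the whole input history to the current state,
    implemented as repeated application of the same f (eq. 10.7)."""
    h = np.zeros(3)                  # initial state h(0)
    for x in xs:
        h = f(h, x)
    return h

xs = rng.normal(size=(5, 2))         # a length-5 input sequence
# Factorization: one more application of f extends g by exactly one step.
assert np.allclose(f(g(xs[:-1]), xs[-1]), g(xs))
# The same parameters handle any sequence length:
assert g(rng.normal(size=(9, 2))).shape == (3,)
```

Because `f` is the only learned object, the same code (and the same parameters) processes sequences of any length, which is the parameter-sharing advantage described above.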
Figure 10.3: The computational graph to compute the training loss of a recurrent network that maps an input sequence of $x$ values to a corresponding sequence of output $o$ values. A loss $L$ measures how far each $o$ is from the corresponding training target $y$. When using softmax outputs, we assume $o$ is the unnormalized log probabilities. The loss $L$ internally computes $\hat{y} = \mathrm{softmax}(o)$ and compares this to the target $y$. The RNN has input-to-hidden connections parametrized by a weight matrix $U$, hidden-to-hidden recurrent connections parametrized by a weight matrix $W$, and hidden-to-output connections parametrized by a weight matrix $V$. Equation 10.8 defines forward propagation in this model. (Left) The RNN and its loss drawn with recurrent connections. (Right) The same seen as a time-unfolded computational graph, where each
node is now associated with one particular time instance.

Some examples of important design patterns for recurrent neural networks include the following:
• Recurrent networks that produce an output at each time step and have recurrent connections between hidden units, illustrated in figure 10.3.

• Recurrent networks that produce an output at each time step and have recurrent connections only from the output at one time step to the hidden units at the next time step, illustrated in figure 10.4.

• Recurrent networks with recurrent connections between hidden units, that read an entire sequence and then produce a single output, illustrated in figure 10.5.

Figure 10.3 is a reasonably representative example that we return to throughout most of the chapter.

The recurrent neural network of figure 10.3 and equation 10.8 is universal in the sense that any function computable by a Turing machine can be computed by such a recurrent network of a finite size. The output can be read from the RNN after a number of time steps that is asymptotically linear in the number of time steps used by the Turing machine and asymptotically linear in the length of the input (Siegelmann and Sontag, 1991; Siegelmann, 1995; Siegelmann and Sontag, 1995; Hyotyniemi, 1996).
The functions computable by a Turing machine are discrete, so these results regard exact implementation of the function, not approximations. The RNN, when used as a Turing machine, takes a binary sequence as input, and its outputs must be discretized to provide a binary output. It is possible to compute all functions in this setting using a single specific RNN of finite size (Siegelmann and Sontag (1995) use 886 units). The "input" of the Turing machine is a specification of the function to be computed, so the same network that simulates this Turing machine is sufficient for all problems. The theoretical RNN used for the proof can simulate an unbounded stack by representing its activations and weights with rational numbers of unbounded precision.
We now develop the forward propagation equations for the RNN depicted in figure 10.3. The figure does not specify the choice of activation function for the hidden units. Here we assume the hyperbolic tangent activation function. Also, the figure does not specify exactly what form the output and loss function take. Here we assume that the output is discrete, as if the RNN is used to predict words or characters. A natural way to represent discrete variables is to regard the output $o$ as giving the unnormalized log probabilities of each possible value of the discrete variable. We can then apply the softmax operation as a post-processing step to obtain a vector $\hat{y}$ of normalized probabilities over the output. Forward propagation begins with a specification of the initial state $h^{(0)}$. Then, for each time step from
Figure 10.4: An RNN whose only recurrence is the feedback connection from the output to the hidden layer. At each time step $t$, the input is $x^{(t)}$, the hidden layer activations are $h^{(t)}$, the
outputs are $o^{(t)}$, the targets are $y^{(t)}$, and the loss is $L^{(t)}$. (Left) Circuit diagram. (Right) Unfolded computational graph. Such an RNN is less powerful (can express a smaller set of functions) than those in the family represented by figure 10.3. The RNN in figure 10.3 can choose to put any information it wants about the past into its hidden representation $h$ and transmit $h$ to the future. The RNN in this figure is trained to put a specific output value into $o$, and $o$ is the only information it is allowed to send to the future. There are no direct connections from $h$ going forward. The previous $h$ is connected to the present only indirectly, via the predictions it was used to produce. Unless $o$ is very high-dimensional and rich, it will usually lack important information from the past. This makes the RNN in this figure less powerful, but it may be easier to train
because each time step can be trained in isolation from the others, allowing greater parallelization during training, as described in section 10.2.1.
$t = 1$ to $t = \tau$, we apply the following update equations:

$a^{(t)} = b + W h^{(t-1)} + U x^{(t)}$ (10.8)
$h^{(t)} = \tanh(a^{(t)})$ (10.9)
$o^{(t)} = c + V h^{(t)}$ (10.10)
$\hat{y}^{(t)} = \mathrm{softmax}(o^{(t)})$ (10.11)

where the parameters are the bias vectors $b$ and $c$ along with the weight matrices $U$, $V$ and $W$, respectively for input-to-hidden, hidden-to-output and hidden-to-hidden connections. This is an example of a recurrent network that maps an input sequence to an output sequence of the same length. The total loss for a given sequence of $x$ values paired with a sequence of $y$ values would then be just the sum of the losses over all the time steps. For example, if $L^{(t)}$ is the negative log-likelihood of $y^{(t)}$ given $x^{(1)}, \ldots, x^{(t)}$, then
$L\left(\{x^{(1)}, \ldots, x^{(\tau)}\}, \{y^{(1)}, \ldots, y^{(\tau)}\}\right)$ (10.12)
$= \sum_t L^{(t)}$ (10.13)
$= -\sum_t \log p_{\mathrm{model}}\left(y^{(t)} \mid \{x^{(1)}, \ldots, x^{(t)}\}\right),$ (10.14)

where $p_{\mathrm{model}}\left(y^{(t)} \mid \{x^{(1)}, \ldots, x^{(t)}\}\right)$ is given by reading the entry for $y^{(t)}$ from the model's output vector $\hat{y}^{(t)}$. Computing the gradient of this loss function with respect to the parameters is an expensive operation. The gradient computation involves performing a forward propagation pass moving left to right through our illustration of the unrolled graph in figure 10.3, followed by a backward propagation pass moving right to left through the graph. The runtime is $O(\tau)$ and cannot be reduced by parallelization, because the forward propagation graph is inherently sequential; each time step may only be computed after the previous one.
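The forward pass of equations 10.8–10.11 and the summed loss of equations 10.12–10.14 can be sketched in NumPy as follows. This is a minimal sketch, not a production implementation; the layer sizes, random initialization, and the helper name `rnn_forward` are illustrative assumptions:

```python
import numpy as np

def rnn_forward(x_seq, y_seq, U, V, W, b, c, h0):
    """Forward pass (eqs. 10.8-10.11) plus total loss (eqs. 10.12-10.14).
    x_seq: (tau, n_in); y_seq: (tau,) integer targets; h0: (n_h,)."""
    h, loss, hs, yhats = h0, 0.0, [], []
    for x, y in zip(x_seq, y_seq):
        a = b + W @ h + U @ x            # eq. 10.8
        h = np.tanh(a)                   # eq. 10.9
        o = c + V @ h                    # eq. 10.10
        yhat = np.exp(o - o.max())
        yhat /= yhat.sum()               # eq. 10.11 (numerically stable softmax)
        loss -= np.log(yhat[y])          # one term of eq. 10.14
        hs.append(h)
        yhats.append(yhat)
    return loss, hs, yhats

# Illustrative sizes and random parameters (assumptions, not from the text):
rng = np.random.default_rng(0)
n_in, n_h, n_out, tau = 4, 5, 3, 6
U = 0.1 * rng.normal(size=(n_h, n_in))
V = 0.1 * rng.normal(size=(n_out, n_h))
W = 0.1 * rng.normal(size=(n_h, n_h))
b, c, h0 = np.zeros(n_h), np.zeros(n_out), np.zeros(n_h)
x_seq = rng.normal(size=(tau, n_in))
y_seq = rng.integers(0, n_out, size=tau)

loss, hs, yhats = rnn_forward(x_seq, y_seq, U, V, W, b, c, h0)
```

The sequential loop over `t` is exactly the $O(\tau)$ dependence discussed above: each iteration needs the `h` produced by the previous one.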
States computed in the forward pass must be stored until they are reused during the backward pass, so the memory cost is also $O(\tau)$. The back-propagation algorithm applied to the unrolled graph with $O(\tau)$ cost is called back-propagation through time, or BPTT, and is discussed further in section 10.2.2. The network with recurrence between hidden units is thus very powerful but also expensive to train. Is there an alternative?

10.2.1 Teacher Forcing and Networks with Output Recurrence

The network with recurrent connections only from the output at one time step to the hidden units at the next time step (shown in figure 10.4) is strictly less powerful
because it lacks hidden-to-hidden recurrent connections. For example, it cannot simulate a universal Turing machine. Because this network lacks hidden-to-hidden recurrence, it requires that the output units capture all of the information about the past that the network will use to predict the future. Because the output units are explicitly trained to match the training set targets, they are unlikely to capture the necessary information about the past history of the input, unless the user knows how to describe the full state of the system and provides it as part of the training set targets. The advantage of eliminating hidden-to-hidden recurrence is that, for any loss function based on comparing the prediction at time $t$ to the training target at time $t$, all the time steps are decoupled. Training can thus be parallelized, with the gradient for each step $t$ computed in isolation. There is no need to compute the output for the previous time step first, because the training set provides the ideal value of that output.
Figure 10.5: Time-unfolded recurrent neural network with a single output at the end of the sequence. Such a network can be used to summarize a sequence and produce a fixed-size representation used as input for further processing. There might be a target right at the end (as depicted here), or the gradient on the output $o^{(t)}$ can be obtained by back-propagating from further downstream modules.

Models that have recurrent connections from their outputs leading back into the model may be trained with teacher forcing. Teacher forcing is a procedure that emerges from the maximum likelihood criterion, in which during training the model receives the ground truth output $y^{(t)}$
as input at time $t + 1$. We can see this by examining a sequence with two time steps. The conditional maximum
Figure 10.6: Illustration of teacher forcing. Teacher forcing is a training technique that is applicable to RNNs that have connections from their output to their hidden states at the next time step. (Left) At train time, we feed the correct output $y^{(t)}$ drawn from the training set as input to $h^{(t+1)}$. (Right) When the model is deployed, the true output is generally not known
. In this case, we approximate the correct output $y^{(t)}$ with the model's output $o^{(t)}$, and feed the output back into the model.
likelihood criterion is

$\log p\left(y^{(1)}, y^{(2)} \mid x^{(1)}, x^{(2)}\right)$ (10.15)
$= \log p\left(y^{(2)} \mid y^{(1)}, x^{(1)}, x^{(2)}\right) + \log p\left(y^{(1)} \mid x^{(1)}, x^{(2)}\right).$ (10.16)

In this example, we see that at time $t = 2$, the model is trained to maximize the conditional probability of $y^{(2)}$ given both the $x$ sequence so far and the previous $y$ value from the training set. Maximum likelihood thus specifies that during training, rather than feeding the model's own output back into itself, these connections should be fed with the target values specifying what the correct output should be. This is illustrated in figure 10.6.

We originally motivated teacher forcing as allowing us to avoid back-propagation through time in models that lack hidden-to-hidden connections. Teacher forcing may still be applied to models that have hidden-to-hidden connections, so long as they have connections from the output at one time step to values computed in the next time step. However, as soon as the hidden units become
a function of earlier time steps, the BPTT algorithm is necessary. Some models may thus be trained with both teacher forcing and BPTT.

The disadvantage of strict teacher forcing arises if the network is going to be later used in an open-loop mode, with the network outputs (or samples from the output distribution) fed back as input. In this case, the kind of inputs that the network sees during training could be quite different from the kind of inputs that it will see at test time. One way to mitigate this problem is to train with both teacher-forced inputs and free-running inputs, for example by predicting the correct target a number of steps in the future through the unfolded recurrent output-to-input paths. In this way, the network can learn to take into account input conditions (such as those it generates itself in the free-running mode) not seen during training, and how to map the state back towards one that will make the network generate proper outputs after a few steps.
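The difference between the two input regimes can be sketched for a figure 10.4-style model. This is only an illustration of the flow of inputs: the model is untrained, and the parameter name `R` (output-to-hidden weights) and the helpers `step` and `run` are hypothetical names introduced here, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n_h, n_out = 4, 3
R = 0.5 * rng.normal(size=(n_h, n_out))  # output(t-1) -> hidden(t) weights (hypothetical)
V = 0.5 * rng.normal(size=(n_out, n_h))
b = np.zeros(n_h)

def step(y_prev):
    """One step of a fig. 10.4-style model: the hidden state sees only the previous output."""
    return V @ np.tanh(b + R @ y_prev)   # returns o(t)

def run(y_targets, teacher_forcing):
    """Teacher-forced (train time): feed the ground-truth y(t) forward.
    Free-running (test time): feed the model's own output o(t) forward."""
    y_prev = np.zeros(n_out)
    outputs = []
    for y_true in y_targets:
        o = step(y_prev)
        outputs.append(o)
        y_prev = y_true if teacher_forcing else o
    return outputs

y_targets = rng.normal(size=(5, n_out))
forced = run(y_targets, teacher_forcing=True)
free = run(y_targets, teacher_forcing=False)
# Both modes agree on the first step, then the free-running trajectory
# drifts away from the teacher-forced one: the train/test mismatch above.
assert np.allclose(forced[0], free[0])
assert not np.allclose(forced[1], free[1])
```

The single boolean switch is the entire difference between the two regimes; everything the network computes downstream of that switch can diverge.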
Another approach (Bengio et al., 2015b) to mitigate the gap between the inputs seen at train time and the inputs seen at test time randomly chooses to use generated values or actual data values as input. This approach exploits a curriculum learning strategy to gradually use more of the generated values as input.

10.2.2 Computing the Gradient in a Recurrent Neural Network

Computing the gradient through a recurrent neural network is straightforward. One simply applies the generalized back-propagation algorithm of section 6.5.6
to the unrolled computational graph. No specialized algorithms are necessary. Gradients obtained by back-propagation may then be used with any general-purpose gradient-based techniques to train an RNN.

To gain some intuition for how the BPTT algorithm behaves, we provide an example of how to compute gradients by BPTT for the RNN equations above (equations 10.8 and 10.12). The nodes of our computational graph include the parameters $U$, $V$, $W$, $b$ and $c$, as well as the sequence of nodes indexed by $t$ for $x^{(t)}$, $h^{(t)}$, $o^{(t)}$ and $L^{(t)}$. For each node $N$ we need to compute the gradient $\nabla_N L$ recursively, based on the gradient computed at nodes that follow it in the graph. We start the recursion with the nodes immediately preceding the final loss:

$\frac{\partial L}{\partial L^{(t)}} = 1.$ (10.17)

In this derivation we assume that the outputs $o^{(t)}$ are used as the argument to the softmax function to obtain the vector $\hat{y}$ of probabilities over the output. We also assume that the loss is the negative log-likelihood of the true
target $y^{(t)}$ given the input so far. The gradient $\nabla_{o^{(t)}} L$ on the outputs at time step $t$, for all $i$, $t$, is as follows:

$(\nabla_{o^{(t)}} L)_i = \frac{\partial L}{\partial o_i^{(t)}} = \frac{\partial L}{\partial L^{(t)}} \frac{\partial L^{(t)}}{\partial o_i^{(t)}} = \hat{y}_i^{(t)} - \mathbf{1}_{i, y^{(t)}}.$ (10.18)

We work our way backwards, starting from the end of the sequence. At the final time step $\tau$, $h^{(\tau)}$ only has $o^{(\tau)}$ as a descendent, so its gradient is simple:

$\nabla_{h^{(\tau)}} L = V^\top \nabla_{o^{(\tau)}} L.$ (10.19)

We can then iterate backwards in time to back-propagate gradients through time, from $t = \tau - 1$ down to $t = 1$, noting that $h^{(t)}$ (for $t < \tau$) has as descendents both $o^{(t)}$ and $h^{(t+1)}$.
Its gradient is thus given by

$\nabla_{h^{(t)}} L = \left(\frac{\partial h^{(t+1)}}{\partial h^{(t)}}\right)^{\!\top} (\nabla_{h^{(t+1)}} L) + \left(\frac{\partial o^{(t)}}{\partial h^{(t)}}\right)^{\!\top} (\nabla_{o^{(t)}} L)$ (10.20)
$= W^\top \mathrm{diag}\!\left(1 - \left(h^{(t+1)}\right)^2\right) (\nabla_{h^{(t+1)}} L) + V^\top (\nabla_{o^{(t)}} L),$ (10.21)

where $\mathrm{diag}\!\left(1 - \left(h^{(t+1)}\right)^2\right)$ indicates the diagonal matrix containing the elements $1 - (h_i^{(t+1)})^2$. This is the Jacobian of the hyperbolic tangent associated with hidden unit $i$ at time $t + 1$.
Once the gradients on the internal nodes of the computational graph are obtained, we can obtain the gradients on the parameter nodes. Because the parameters are shared across many time steps, we must take some care when denoting calculus operations involving these variables. The equations we wish to implement use the bprop method of section 6.5.6, which computes the contribution of a single edge in the computational graph to the gradient. However, the $\nabla_W f$ operator used in calculus takes into account the contribution of $W$ to the value of $f$ due to all edges in the computational graph. To resolve this ambiguity, we introduce dummy variables $W^{(t)}$ that are defined to be copies of $W$, but with each $W^{(t)}$ used only at time step $t$. We may then use $\nabla_{W^{(t)}}$ to denote the contribution of the weights at time step $t$ to the gradient.

Using this notation, the gradient on the remaining parameters is given by:
$\nabla_c L = \sum_t \left(\frac{\partial o^{(t)}}{\partial c}\right)^{\!\top} \nabla_{o^{(t)}} L = \sum_t \nabla_{o^{(t)}} L$ (10.22)
$\nabla_b L = \sum_t \left(\frac{\partial h^{(t)}}{\partial b^{(t)}}\right)^{\!\top} \nabla_{h^{(t)}} L = \sum_t \mathrm{diag}\!\left(1 - \left(h^{(t)}\right)^2\right) \nabla_{h^{(t)}} L$ (10.23)
$\nabla_V L = \sum_t \sum_i \frac{\partial L}{\partial o_i^{(t)}} \nabla_V o_i^{(t)} = \sum_t (\nabla_{o^{(t)}} L)\, h^{(t)\top}$ (10.24)
$\nabla_W L = \sum_t \sum_i \frac{\partial L}{\partial h_i^{(t)}} \nabla_{W^{(t)}} h_i^{(t)}$ (10.25)
$= \sum_t \mathrm{diag}\!\left(1 - \left(h^{(t)}\right)^2\right) (\nabla_{h^{(t)}} L)\, h^{(t-1)\top}$ (10.26)
$\nabla_U L = \sum_t \sum_i \frac{\partial L}{\partial h_i^{(t)}} \nabla_{U^{(t)}} h_i^{(t)}$ (10.27)
$= \sum_t \mathrm{diag}\!\left(1 - \left(h^{(t)}\right)^2\right) (\nabla_{h^{(t)}} L)\, x^{(t)\top}$ (10.28)
We do not need to compute the gradient with respect to $x^{(t)}$ for training, because it does not have any parameters as ancestors in the computational graph defining the loss.
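Equations 10.18–10.28 can be collected into a short BPTT routine and checked against a finite-difference approximation of the gradient. This is a minimal sketch under illustrative assumptions (sizes, random initialization, helper names `forward` and `bptt` are all introduced here for demonstration):

```python
import numpy as np

def forward(x_seq, y_seq, U, V, W, b, c):
    """Forward pass (eqs. 10.8-10.11); returns loss, hidden states, softmax outputs."""
    hs = [np.zeros(len(b))]                      # hs[0] is h(0)
    yhats, loss = [], 0.0
    for x, y in zip(x_seq, y_seq):
        h = np.tanh(b + W @ hs[-1] + U @ x)
        o = c + V @ h
        p = np.exp(o - o.max())
        p /= p.sum()
        loss -= np.log(p[y])
        hs.append(h)
        yhats.append(p)
    return loss, hs, yhats

def bptt(x_seq, y_seq, U, V, W, b, c):
    """Back-propagation through time, following eqs. 10.18-10.28."""
    loss, hs, yhats = forward(x_seq, y_seq, U, V, W, b, c)
    gU, gV, gW = np.zeros_like(U), np.zeros_like(V), np.zeros_like(W)
    gb, gc = np.zeros_like(b), np.zeros_like(c)
    dh_carry = np.zeros_like(b)                  # W^T diag(1-h^2) grad from step t+1; zero at t = tau
    for t in reversed(range(len(x_seq))):
        do = yhats[t].copy()
        do[y_seq[t]] -= 1.0                      # eq. 10.18: yhat - one-hot(y)
        dh = V.T @ do + dh_carry                 # eqs. 10.19-10.21
        da = (1.0 - hs[t + 1] ** 2) * dh         # tanh Jacobian, diag(1 - h^2)
        gc += do                                 # eq. 10.22
        gb += da                                 # eq. 10.23
        gV += np.outer(do, hs[t + 1])            # eq. 10.24
        gW += np.outer(da, hs[t])                # eq. 10.26
        gU += np.outer(da, x_seq[t])             # eq. 10.28
        dh_carry = W.T @ da
    return loss, gU, gV, gW, gb, gc

rng = np.random.default_rng(0)
n_in, n_h, n_out, tau = 3, 4, 3, 5
U = 0.5 * rng.normal(size=(n_h, n_in))
V = 0.5 * rng.normal(size=(n_out, n_h))
W = 0.5 * rng.normal(size=(n_h, n_h))
b, c = np.zeros(n_h), np.zeros(n_out)
x_seq = rng.normal(size=(tau, n_in))
y_seq = rng.integers(0, n_out, size=tau)

loss, gU, gV, gW, gb, gc = bptt(x_seq, y_seq, U, V, W, b, c)

# Finite-difference check on one entry of W:
eps = 1e-5
W[0, 0] += eps; lp = forward(x_seq, y_seq, U, V, W, b, c)[0]
W[0, 0] -= 2 * eps; lm = forward(x_seq, y_seq, U, V, W, b, c)[0]
W[0, 0] += eps
assert abs((lp - lm) / (2 * eps) - gW[0, 0]) < 1e-6
```

The accumulation of `gW` across iterations is exactly the sum over the dummy copies $W^{(t)}$ described above: each loop iteration contributes the gradient through one time step's edge.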
10.2.3 Recurrent Networks as Directed Graphical Models

In the example recurrent network we have developed so far, the losses $L^{(t)}$ were cross-entropies between training targets $y^{(t)}$ and outputs $o^{(t)}$. As with a feedforward network, it is in principle possible to use almost any loss with a recurrent network. The loss should be chosen based on the task. As with a feedforward network, we usually wish to interpret the output of the RNN as a probability distribution, and we usually use the cross-entropy associated with that distribution to define the loss. Mean squared error is the cross-entropy loss associated with an output distribution that is a unit Gaussian, for example, just as with a feedforward network.

When we use a predictive log-likelihood training objective, such as equation 10.12, we train the RNN to estimate the conditional distribution of the next sequence element $y^{(t)}$ given the past inputs. This may mean that we maximize the log-likelihood

$\log p\left(y^{(t)} \mid x^{(1)}, \ldots, x^{(t)}\right),$ (10.29)
or, if the model includes connections from the output at one time step to the next time step,

$\log p\left(y^{(t)} \mid x^{(1)}, \ldots, x^{(t)}, y^{(1)}, \ldots, y^{(t-1)}\right).$ (10.30)

Decomposing the joint probability over the sequence of $y$ values as a series of one-step probabilistic predictions is one way to capture the full joint distribution across the whole sequence. When we do not feed past $y$ values as inputs that condition the next step prediction, the directed graphical model contains no edges from any $y^{(i)}$ in the past to the current $y^{(t)}$. In this case, the outputs $y$ are conditionally independent given the sequence of $x$ values. When we do feed the actual $y$ values (not their prediction, but the actual observed or generated values) back into the network, the directed graphical model contains edges from all $y^{(i)}$ values in the past
to the current $y^{(t)}$ value.

As a simple example, let us consider the case where the RNN models only a sequence of scalar random variables $\mathbb{Y} = \{y^{(1)}, \ldots, y^{(\tau)}\}$, with no additional inputs $x$. The input at time step $t$ is simply the output at time step $t - 1$. The RNN then defines a directed graphical model over the $y$ variables. We parametrize the joint distribution of these observations using the chain rule (equation 3.6) for conditional probabilities:

$P(\mathbb{Y}) = P\left(y^{(1)}, \ldots, y^{(\tau)}\right) = \prod_{t=1}^{\tau} P\left(y^{(t)} \mid y^{(t-1)}, y^{(t-2)}, \ldots, y^{(1)}\right),$ (10.31)

where the right-hand side of the bar is empty for $t = 1$, of course. Hence the negative log-likelihood of a set of values
$\{y^{(1)}, \ldots, y^{(\tau)}\}$ according to such a model
Figure 10.7: Fully connected graphical model for a sequence $y^{(1)}, y^{(2)}, \ldots, y^{(t)}, \ldots$: every past observation $y^{(i)}$ may influence the conditional distribution of some $y^{(t)}$ (for $t > i$), given the previous values. Parametrizing the graphical model directly according to this graph (as in equation 10.6) might be very inefficient, with an ever growing number of inputs and parameters for each element of the sequence. RNNs obtain the same full connectivity but efficient parametrization, as illustrated in figure 10.8.

is

$L = \sum_t L^{(t)},$ (10.32)

where

$L^{(t)} = -\log P\left(y^{(t)} \mid y^{(t-1)}, y^{(t-2)}, \ldots, y^{(1)}\right).$ (10.33)
Figure 10.8: Introducing the state variable in the graphical model of the RNN, even though it is a deterministic function of its inputs, helps to show how we can obtain a very efficient parametrization, based on equation 10.5. Every stage in the sequence (for h^(t) and y^(t)) involves the same structure (the same number of inputs for each node) and can share the same parameters with the other stages.

The edges in a graphical
model indicate which variables depend directly on other variables. Many graphical models aim to achieve statistical and computational efficiency by omitting edges that do not correspond to strong interactions. For
example, it is common to make the Markov assumption that the graphical model should contain only edges from {y^(t−k), ..., y^(t−1)} to y^(t), rather than edges from the entire past history. However, in some cases, we believe that all past inputs should have an influence on the next element of the sequence. RNNs are useful when we believe that the distribution over y^(t) may depend on a value of y^(i) from the distant past in a way that is not captured by the effect of y^(i) on y^(t−1).

One way to interpret an RNN as a graphical model is to view the RNN as defining a graphical model whose structure is the complete graph, able to represent direct dependencies between any pair of y values. The graphical model over the y values with the complete graph structure is shown in figure 10.7. The complete graph interpretation of the RNN is based on ignoring the hidden units h^(t) by marginalizing them out of the model.

It is more interesting to consider the graphical model structure of RNNs that results
from regarding the hidden units h^(t) as random variables.¹ Including the hidden units in the graphical model reveals that the RNN provides a very efficient parametrization of the joint distribution over the observations. Suppose that we represented an arbitrary joint distribution over discrete values with a tabular representation: an array containing a separate entry for each possible assignment of values, with the value of that entry giving the probability of that assignment occurring. If y can take on k different values, the tabular representation would have O(k^τ) parameters. By comparison, thanks to parameter sharing, the number of parameters in the RNN is O(1) as a function of sequence length. The number of parameters in the RNN may be adjusted to control model capacity but is not forced to scale with sequence length. Equation 10.5 shows that the RNN parametrizes long-term relationships between variables efficiently, using recurrent applications of the same function f and the same parameters θ at each time step. Figure 10.8 illustrates the
graphical model interpretation. Incorporating the h^(t) nodes in the graphical model decouples the past and the future, acting as an intermediate quantity between them. A variable y^(i) in the distant past may influence a variable y^(t) via its effect on h. The structure of this graph shows that the model can be efficiently parametrized by using the same conditional probability distributions at each time step, and that when the variables are all observed, the probability of the joint assignment of all variables can be evaluated efficiently.

Even with the efficient parametrization of the graphical model, some operations remain computationally challenging. For example, it is difficult to predict missing

¹ The conditional distribution over these variables given their parents is deterministic. This is perfectly legitimate, though it is somewhat rare to design a graphical model with such deterministic hidden units.
values in the middle of the sequence.

The price recurrent networks pay for their reduced number of parameters is that optimizing the parameters may be difficult. The parameter sharing used in recurrent networks relies on the assumption that the same parameters can be used for different time steps. Equivalently, the assumption is that the conditional probability distribution over the variables at time t + 1 given the variables at time t is stationary, meaning that the relationship between the previous time step and the next time step does not depend on t. In principle, it would be possible to use t as an extra input at each time step and let the learner discover any time-dependence while sharing as much as it can between different time steps. This would already be much better than using a different conditional probability distribution for each t, but the network would then have to extrapolate when faced with new values of t.

To complete our view of an RNN as a graphical model, we must describe how to draw samples from the model. The main operation that we need to perform is simply to sample from the conditional distribution at each time step. However, there is one additional complication. The RNN must have some mechanism for determining the length
of the sequence. This can be achieved in various ways.

In the case when the output is a symbol taken from a vocabulary, one can add a special symbol corresponding to the end of a sequence (Schmidhuber, 2012). When that symbol is generated, the sampling process stops. In the training set, we insert this symbol as an extra member of the sequence, immediately after x^(τ) in each training example.

Another option is to introduce an extra Bernoulli output to the model that represents the decision to either continue generation or halt generation at each time step. This approach is more general than the approach of adding an extra symbol to the vocabulary, because it may be applied to any RNN, rather than only RNNs that output a sequence of symbols. For example, it may be applied to an RNN that emits a sequence of real numbers. The new output unit is usually a sigmoid unit trained with the cross-entropy loss. In this approach the sigmoid is trained to maximize
the log-probability of the correct prediction as to whether the sequence ends or continues at each time step.

Another way to determine the sequence length τ is to add an extra output to the model that predicts the integer τ itself. The model can sample a value of τ and then sample τ steps worth of data. This approach requires adding an extra input to the recurrent update at each time step so that the recurrent update is aware of whether it is near the end of the generated sequence. This extra input can either consist of the value of τ or can consist of τ − t, the number of remaining
time steps. Without this extra input, the RNN might generate sequences that end abruptly, such as a sentence that ends before it is complete. This approach is based on the decomposition

P(x^(1), ..., x^(τ)) = P(τ) P(x^(1), ..., x^(τ) | τ).    (10.34)

The strategy of predicting τ directly is used, for example, by Goodfellow et al. (2014d).

10.2.4 Modeling Sequences Conditioned on Context with RNNs

In the previous section we described how an RNN could correspond to a directed graphical model over a sequence of random variables y^(t) with no inputs x. Of course, our development of RNNs as in equation 10.8 included a sequence of inputs x^(1), x^(2), ..., x^(τ). In general, RNNs allow the extension of the graphical model view to represent not only a joint distribution over the y variables but also a conditional distribution over y given x. As discussed in the context of feedforward networks in section 6.2.1.1, any model representing a variable
P(y; θ) can be reinterpreted as a model representing a conditional distribution P(y | ω) with ω = θ. We can extend such a model to represent a distribution P(y | x) by using the same P(y | ω) as before, but making ω a function of x. In the case of an RNN, this can be achieved in different ways. We review here the most common and obvious choices.

Previously, we have discussed RNNs that take a sequence of vectors x^(t) for t = 1, ..., τ as input. Another option is to take only a single vector x as input. When x is a fixed-size vector, we can simply make it an extra input of the RNN that generates the y sequence. Some common ways of providing an extra input to an RNN are:

1. as an extra input at each time step, or
2. as the initial state h^(0), or
3.
both.

The first and most common approach is illustrated in figure 10.9. The interaction between the input x and each hidden unit vector h^(t) is parametrized by a newly introduced weight matrix R that was absent from the model of only the sequence of y values. The same product x⊤R is added as additional input to the hidden units at every time step. We can think of the choice of x as determining the value
of x⊤R that is effectively a new bias parameter used for each of the hidden units. The weights remain independent of the input. We can think of this model as taking the parameters θ of the non-conditional model and turning them into ω, where the bias parameters within ω are now a function of the input.

Figure 10.9: An RNN
that maps a fixed-length vector x into a distribution over sequences y. This RNN is appropriate for tasks such as image captioning, where a single image is used as input to a model that then produces a sequence of words describing the image. Each element y^(t) of the observed output sequence serves both as input (for the current time step) and, during training, as target (for the previous time step).

Rather than receiving only a single vector x as input, the RNN may receive a sequence of vectors x^(t) as input. The RNN described in equation 10.8 corresponds to a conditional distribution P(y^(1), ..., y^(τ) | x^(1), ..., x^(τ)) that makes a conditional independence assumption that this distribution factorizes as

∏_t P(y^(t) | x^(1), ..., x^(t)).    (10.35)

To
remove the conditional independence assumption, we can add connections from the output at time t to the hidden unit at time t + 1, as shown in figure 10.10. The model can then represent arbitrary probability distributions over the y sequence. This kind of model representing a distribution over a sequence given another
Figure 10.10: A conditional recurrent neural network mapping a variable-length sequence of x values into a distribution over sequences of y values of the same length. Compared to figure 10.3, this RNN contains connections from the previous output to the current state.
These connections allow this RNN to model an arbitrary distribution over sequences of y given sequences of x of the same length. The RNN of figure 10.3 is only able to represent distributions in which the y values are conditionally independent from each other given the x values.
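A minimal sketch may help show where these new connections enter the recurrence: the previous output is fed into the current state through one extra weight matrix. This is a NumPy sketch under hypothetical sizes and untrained random weights, not code from the book; the matrix name R and the deterministic output are illustration choices meant only to show the wiring.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dx, dy = 8, 3, 2                  # hypothetical state, input, output sizes
W = rng.normal(0.0, 0.1, (n, n))     # hidden-to-hidden weights
U = rng.normal(0.0, 0.1, (n, dx))    # input-to-hidden weights
R = rng.normal(0.0, 0.1, (n, dy))    # previous-output-to-hidden (the new edges)
V = rng.normal(0.0, 0.1, (dy, n))    # hidden-to-output weights
b, c = np.zeros(n), np.zeros(dy)

def run(xs):
    """Map a sequence of x vectors to a same-length sequence of y vectors.

    The R term feeds y(t-1) into h(t); without it, the y values would be
    conditionally independent of each other given the x sequence."""
    h, y = np.zeros(n), np.zeros(dy)
    ys = []
    for x in xs:
        h = np.tanh(b + W @ h + U @ x + R @ y)
        y = c + V @ h                # e.g. the mean of the y(t) distribution
        ys.append(y)
    return ys

outputs = run([rng.normal(size=dx) for _ in range(4)])
print(len(outputs), outputs[0].shape)  # 4 (2,)
```

Dropping the `R @ y` term recovers the structure of figure 10.3, where each y^(t) depends on the x sequence but not on the other y values.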
sequence still has one restriction, which is that the length of both sequences must be the same. We describe how to remove this restriction in section 10.4.

Figure 10.11: Computation of a typical bidirectional recurrent neural network, meant to learn to map input sequences x to target sequences y,
with loss L^(t) at each step t. The h recurrence propagates information forward in time (towards the right) while the g recurrence propagates information backward in time (towards the left). Thus at each point t, the output units o^(t) can benefit from a relevant summary of the past in its h^(t) input and from a relevant summary of the future in its g^(t) input.

10.3 Bidirectional RNNs

All of the recurrent networks we have considered up to now have a "causal" structure, meaning that the state at time t only captures information from the past, x^(1), ..., x^(t−1), and the present input x^(t). Some of the models we have discussed also allow information from past y values to influence the current state when the y values are available. However, in many applications we want to output a prediction of y
^(t) which may
depend on the whole input sequence. For example, in speech recognition, the correct interpretation of the current sound as a phoneme may depend on the next few phonemes because of co-articulation and potentially may even depend on the next few words because of the linguistic dependencies between nearby words: if there are two interpretations of the current word that are both acoustically plausible, we may have to look far into the future (and the past) to disambiguate them. This is also true of handwriting recognition and many other sequence-to-sequence learning tasks, described in the next section.

Bidirectional recurrent neural networks (or bidirectional RNNs) were invented to address that need (Schuster and Paliwal, 1997). They have been extremely successful (Graves, 2012) in applications where that need arises, such as handwriting recognition (Graves et al., 2008; Graves and Schmidhuber, 2009), speech recognition (Graves and Schmidhuber, 2005; Graves et al., 2013) and bioinformatics (Baldi et al., 1999).

As the name suggests,
bidirectional RNNs combine an RNN that moves forward through time, beginning from the start of the sequence, with another RNN that moves backward through time, beginning from the end of the sequence. Figure 10.11 illustrates the typical bidirectional RNN, with h^(t) standing for the state of the sub-RNN that moves forward through time and g^(t) standing for the state of the sub-RNN that moves backward through time. This allows the output units o^(t) to compute a representation that depends on both the past and the future but is most sensitive to the input values around time t, without having to specify a fixed-size window around t (as one would have to do with a feedforward network, a convolutional network, or a regular RNN with a fixed-size look-ahead buffer).

This idea can be naturally extended to 2-dimensional input, such as images, by having four RNNs, each one going in one of the
four directions: up, down, left, right. At each point (i, j) of a 2-D grid, an output o_{i,j} could then compute a representation that would capture mostly local information but could also depend on long-range inputs, if the RNN is able to learn to carry that information. Compared to a convolutional network, RNNs applied to images are typically more expensive but allow for long-range lateral interactions between features in the same feature map (Visin et al., 2015; Kalchbrenner et al., 2015). Indeed, the forward propagation equations for such RNNs may be written in a form that shows they use a convolution that computes the bottom-up input to each layer, prior to the recurrent propagation across the feature map that incorporates the lateral interactions.
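The two recurrences h and g described above can be sketched as two independent state updates over the same input sequence, one scanning forward and one backward, whose states are concatenated at each step. This is a minimal NumPy sketch under hypothetical sizes and untrained random weights; a real model would add output weights and a loss on top of the concatenated states.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dx = 4, 3                           # hypothetical state and input sizes
Wf, Uf = rng.normal(0.0, 0.1, (n, n)), rng.normal(0.0, 0.1, (n, dx))
Wb, Ub = rng.normal(0.0, 0.1, (n, n)), rng.normal(0.0, 0.1, (n, dx))

def bidirectional_states(xs):
    """Return [h(t); g(t)] for each t: h runs forward over x(1..t),
    g runs backward over x(t..tau), so the concatenation at step t
    summarizes both the past and the future of the sequence."""
    T = len(xs)
    h = np.zeros((T, n))
    g = np.zeros((T, n))
    state = np.zeros(n)
    for t in range(T):                 # forward sub-RNN
        state = np.tanh(Wf @ state + Uf @ xs[t])
        h[t] = state
    state = np.zeros(n)
    for t in reversed(range(T)):       # backward sub-RNN
        state = np.tanh(Wb @ state + Ub @ xs[t])
        g[t] = state
    return np.concatenate([h, g], axis=1)

states = bidirectional_states([rng.normal(size=dx) for _ in range(5)])
print(states.shape)  # (5, 8)
```

Note that neither sub-RNN sees the other's state; only the output layer combines them, which is why the model avoids committing to a fixed-size look-ahead window.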
10.4 Encoder-Decoder Sequence-to-Sequence Architectures

We have seen in figure 10.5 how an RNN can map an input sequence to a fixed-size vector. We have seen in figure 10.9 how an RNN can map a fixed-size vector to a sequence. We have seen in figures 10.3, 10.4, 10.10, and 10.11 how an RNN can map an input sequence to an output sequence of the same length.

Figure 10.12: Example of an encoder-decoder or sequence-to-sequence RNN architecture, for learning to generate an output sequence (y^(1), ..., y^(n_y)) given
an input sequence (x^(1), x^(2), ..., x^(n_x)). It is composed of an encoder RNN that reads the input sequence and a decoder RNN that generates the output sequence (or computes the probability of a given output sequence). The final hidden state of the encoder RNN is used to compute a generally fixed-size context variable c, which represents a semantic summary of the input sequence and is given as input to the decoder RNN.

Here we discuss how an RNN can be trained to map an input sequence to an output sequence which is not necessarily of the same length. This comes up in many applications, such as speech recognition, machine translation, or question
answering, where the input and output sequences in the training set are generally not of the same length (although their lengths might be related).

We often call the input to the RNN the "context." We want to produce a representation of this context, c. The context c might be a vector or sequence of vectors that summarize the input sequence X = (x^(1), ..., x^(n_x)).

The simplest RNN architecture for mapping a variable-length sequence to another variable-length sequence was first proposed by Cho et al. (2014a) and shortly after by Sutskever et al. (2014), who independently developed that architecture and were the first to obtain state-of-the-art translation using this approach. The former system is based on scoring proposals generated by another machine translation system, while the latter uses a standalone recurrent network to generate the translations. These authors respectively called this architecture, illustrated in figure 10.12, the encoder-decoder or sequence-to-sequence architecture. The idea is very simple: (1) an encoder or reader or input RNN
processes the input sequence. The encoder emits the context c, usually as a simple function of its final hidden state. (2) A decoder or writer or output RNN is conditioned on that fixed-length vector (just like in figure 10.9) to generate the output sequence Y = (y^(1), ..., y^(n_y)). The innovation of this kind of architecture over those presented in earlier sections of this chapter is that the lengths n_x and n_y can vary from each other, while previous architectures constrained n_x = n_y = τ. In a sequence-to-sequence architecture, the two RNNs are trained jointly to maximize the average of log P(y^(1), ..., y^(n_y) | x^(1), ..., x^(n_x)) over all the pairs of x and y sequences in the training set. The last state h^(n_x) of the encoder RNN is typically used
as a representation c of the input sequence that is provided as input to the decoder RNN.

If the context c is a vector, then the decoder RNN is simply a vector-to-sequence RNN as described in section 10.2.4. As we have seen, there are at least two ways for a vector-to-sequence RNN to receive input. The input can be provided as the initial state of the RNN, or the input can be connected to the hidden units at each time step. These two ways can also be combined. There is no constraint that the encoder must have the same size of hidden layer as the decoder.

One clear limitation of this architecture is when the context c output by the encoder RNN has a dimension that is too small to properly summarize a long sequence. This phenomenon was observed by Bahdanau et al. (2015) in the context of machine translation. They proposed to make c a variable-length sequence rather than
a fixed-size vector. Additionally, they introduced an attention mechanism that learns to associate elements of the sequence c to elements of the output
sequence. See section 12.4.5.1 for more details.

10.5 Deep Recurrent Networks

The computation in most RNNs can be decomposed into three blocks of parameters and associated transformations:

1. from the input to the hidden state,
2. from the previous hidden state to the next hidden state, and
3. from the hidden state to the output.

With the RNN architecture of figure 10.3, each of these three blocks is associated with a single weight matrix. In other words, when the network is unfolded, each of these corresponds to a shallow transformation. By a shallow transformation, we mean a transformation that would be represented by a single layer within a deep MLP. Typically this is a transformation represented by a learned affine transformation followed by a fixed nonlinearity.

Would it be advantageous to introduce depth in each of these operations? Experimental evidence (Graves et al., 2013; Pascanu et al., 2014a) strongly suggests so. The experimental evidence is in agreement with the idea that we need enough depth in order to perform the required mappings. See also Schmidhuber (1992), El Hihi and Bengio (1996), or Jaeger
Graves et al. (2013) were the first to show a significant benefit of decomposing the state of an RNN into multiple layers as in figure 10.13 (left). We can think of the lower layers in the hierarchy depicted in figure 10.13a as playing a role in transforming the raw input into a representation that is more appropriate, at the higher levels of the hidden state. Pascanu et al. (2014a) go a step further and propose to have a separate MLP (possibly deep) for each of the three blocks enumerated above, as illustrated in figure 10.13b. Considerations of representational capacity suggest to allocate enough capacity in each of these three steps, but doing so by adding depth may hurt learning by making optimization difficult.
In general, it is easier to optimize shallower architectures, and adding the extra depth of figure 10.13b makes the shortest path from a variable in time step t to a variable in time step t + 1 become longer. For example, if an MLP with a single hidden layer is used for the state-to-state transition, we have doubled the length of the shortest path between variables in any two different time steps, compared with the ordinary RNN of figure 10.3. However, as argued by Pascanu et al. (2014a), this
Figure 10.13: A recurrent neural network can be made deep in many ways (Pascanu et al., 2014a). (a) The hidden recurrent state can be broken down into groups organized hierarchically. (b) Deeper computation (e.g., an MLP) can be introduced in the input-to-hidden, hidden-to-hidden and hidden-to-output parts. This may lengthen the shortest path linking different time steps. (c) The path-lengthening effect can be mitigated by introducing skip connections.
can be mitigated by introducing skip connections in the hidden-to-hidden path, as illustrated in figure 10.13c.

10.6 Recursive Neural Networks

Figure 10.14: A recursive network has a computational graph that generalizes that of the recurrent network from a chain to a tree. A variable-size sequence x(1), x(2), ..., x(t) can be mapped to a fixed-size representation (the output o), with a fixed set of parameters (the weight matrices U, V, W). The figure illustrates a supervised learning case in which some target y is provided, which is associated with the whole sequence.

Recursive neural networks² represent yet another generalization of recurrent networks, with a different kind of computational graph, which is structured as a deep tree, rather than the chain-like structure of RNNs.
The typical computational graph for a recursive network is illustrated in figure 10.14.

²We suggest not abbreviating "recursive neural network" as "RNN," to avoid confusion with "recurrent neural network."
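A minimal sketch of a recursive network follows, assuming the simplest possible node computation: a shared affine map of the two children followed by tanh. The names `combine` and `recursive_net` and the weight values are invented for the example, and a full recursive net as in figure 10.14 would also include input and output weight matrices U and V.

```python
import math

def combine(W_left, W_right, left, right):
    # The same parameters are applied at every internal node of the tree.
    return [math.tanh(sum(wl * l for wl, l in zip(rowl, left)) +
                      sum(wr * r for wr, r in zip(rowr, right)))
            for rowl, rowr in zip(W_left, W_right)]

def recursive_net(tree, W_left, W_right):
    # tree is either a leaf embedding (list of floats) or a (left, right) pair.
    if isinstance(tree, tuple):
        left = recursive_net(tree[0], W_left, W_right)
        right = recursive_net(tree[1], W_left, W_right)
        return combine(W_left, W_right, left, right)
    return tree

Wl = [[0.5, 0.0], [0.0, 0.5]]
Wr = [[0.0, 0.5], [0.5, 0.0]]
# Balanced tree over four leaves: only log2(4) = 2 nonlinear compositions.
x1, x2, x3, x4 = [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]
root = recursive_net(((x1, x2), (x3, x4)), Wl, Wr)
```

The root vector is a fixed-size representation of the whole variable-size input, produced by the same small set of parameters at every node.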
Recursive neural networks were introduced by Pollack (1990), and their potential use for learning to reason was described by Bottou (2011). Recursive networks have been successfully applied to processing data structures as input to neural nets (Frasconi et al., 1997, 1998), in natural language processing (Socher et al., 2011a,c, 2013a) as well as in computer vision (Socher et al., 2011b). One clear advantage of recursive nets over recurrent nets is that for a sequence of the same length τ, the depth (measured as the number of compositions of nonlinear operations) can be drastically reduced from τ to O(log τ), which might help deal with long-term dependencies. An open question is how to best structure the tree. One option is to have a tree structure which does not depend on the data, such as a balanced binary tree. In some application domains, external methods can suggest the appropriate tree structure. For example, when processing natural language sentences, the tree structure for the recursive network can be fixed to the structure of the parse tree of the sentence provided by a natural language parser (Socher et al., 2011a, 2013a).
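The τ-versus-O(log τ) depth comparison is easy to check on toy tree structures. In this small sketch (names invented for the example), a chain of nested pairs stands in for the recurrent net's unfolded graph and a balanced binary tree stands in for the recursive net's graph.

```python
def balanced_tree(leaves):
    # Pair the leaves up into a balanced binary tree of nested 2-tuples.
    if len(leaves) == 1:
        return leaves[0]
    mid = len(leaves) // 2
    return (balanced_tree(leaves[:mid]), balanced_tree(leaves[mid:]))

def chain(leaves):
    # The unfolded graph of a recurrent net: a maximally unbalanced tree.
    t = leaves[0]
    for leaf in leaves[1:]:
        t = (t, leaf)
    return t

def depth(tree):
    # Number of nonlinear compositions from the root to the deepest leaf.
    if isinstance(tree, tuple):
        return 1 + max(depth(tree[0]), depth(tree[1]))
    return 0

tau = 256
tree_depth = depth(balanced_tree(list(range(tau))))
chain_depth = depth(chain(list(range(tau))))
```

For τ = 256 the balanced tree composes only 8 nonlinearities along any root-to-leaf path, versus 255 for the chain.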
Ideally, one would like the learner itself to discover and infer the tree structure that is appropriate for any given input, as suggested by Bottou (2011). Many variants of the recursive net idea are possible. For example, Frasconi et al. (1997) and Frasconi et al. (1998) associate the data with a tree structure, and associate the inputs and targets with individual nodes of the tree. The computation performed by each node does not have to be the traditional artificial neuron computation (affine transformation of all inputs followed by a monotone nonlinearity). For example, Socher et al. (2013a) propose using tensor operations and bilinear forms, which have previously been found useful to model relationships between concepts (Weston et al., 2010; Bordes et al., 2012) when the concepts are represented by continuous vectors (embeddings).
10.7 The Challenge of Long-Term Dependencies

The mathematical challenge of learning long-term dependencies in recurrent networks was introduced in section 8.2.5. The basic problem is that gradients propagated over many stages tend to either vanish (most of the time) or explode (rarely, but with much damage to the optimization). Even if we assume that the parameters are such that the recurrent network is stable (can store memories, with gradients not exploding), the difficulty with long-term dependencies arises from the exponentially smaller weights given to long-term interactions (involving the multiplication of many Jacobians) compared to short-term ones. Many other sources provide a deeper treatment (Hochreiter, 1991; Doya, 1993; Bengio et al., 1994; Pascanu et al., 2013).
Figure 10.15: When composing many nonlinear functions (like the linear-tanh layer shown here), the result is highly nonlinear, typically with most of the values associated with a tiny derivative, some values with a large derivative, and many alternations between increasing and decreasing. In this plot, we plot a linear projection of a 100-dimensional hidden state down to a single dimension, plotted on the y-axis. The x-axis is the coordinate of the initial state along a random direction in the 100-dimensional space. We can thus view this plot as a linear cross-section of a high-dimensional function. The plots show the function after each time step, or equivalently, after each number of times the transition function has been composed.

In this section, we describe the problem in more detail. The remaining sections describe approaches to overcoming the problem. Recurrent networks involve the composition of the same function multiple times, once per time step. These compositions can result in extremely nonlinear behavior, as illustrated in figure 10.15.
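The qualitative behavior in figure 10.15 can be reproduced in one dimension. The sketch below (an illustration with an arbitrarily chosen weight, not the book's 100-dimensional experiment) composes the linear-tanh transition h ← tanh(w·h) and estimates derivatives by finite differences: after many compositions, most initial states sit in flat regions with a tiny derivative, while the region near the unstable point at 0 has a very large one.

```python
import math

def compose(h, w, t):
    # Apply the linear-tanh transition h <- tanh(w * h) for t time steps.
    for _ in range(t):
        h = math.tanh(w * h)
    return h

def deriv(x, w, t, eps=1e-4):
    # Finite-difference estimate of the composed function's derivative at x.
    return (compose(x + eps, w, t) - compose(x - eps, w, t)) / (2 * eps)

w = 2.5                                   # |w| > 1 makes 0 an unstable fixed point
xs = [i / 10.0 for i in range(-30, 31)]   # a line of initial states

small_after_1 = sum(abs(deriv(x, w, 1)) < 0.1 for x in xs)
small_after_20 = sum(abs(deriv(x, w, 20)) < 0.1 for x in xs)
# After 20 compositions nearly every initial state has a tiny derivative,
# while states near 0 have an enormous one.
```

This is the one-dimensional analogue of the "mostly flat, occasionally steep" cross-sections in the figure.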
In particular, the function composition employed by recurrent neural networks somewhat resembles matrix multiplication. We can think of the recurrence relation

h^(t) = W^T h^(t-1)    (10.36)

as a very simple recurrent neural network lacking a nonlinear activation function, and lacking inputs x. As described in section 8.2.5, this recurrence relation essentially describes the power method. It may be simplified to

h^(t) = (W^t)^T h^(0),    (10.37)

and if W admits an eigendecomposition of the form

W = Q Λ Q^T,    (10.38)
with orthogonal Q, the recurrence may be simplified further to

h^(t) = Q^T Λ^t Q h^(0).    (10.39)

The eigenvalues are raised to the power of t, causing eigenvalues with magnitude less than one to decay to zero and eigenvalues with magnitude greater than one to explode. Any component of h^(0) that is not aligned with the largest eigenvector will eventually be discarded. This problem is particular to recurrent networks. In the scalar case, imagine multiplying a weight w by itself many times. The product w^t will either vanish or explode depending on the magnitude of w. However, if we make a non-recurrent network that has a different weight w^(t) at each time step, the situation is different. If the initial state is given by 1, then the state at time t is given by ∏_t w^(t).
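The behavior described by equation 10.39 can be verified numerically. In this sketch (values chosen purely for illustration), W = Q Λ Q^T has eigenvalues 1.2 and 0.5 with eigenvectors [1, 1] and [1, -1]; repeatedly applying W makes the state grow like 1.2^t while aligning with the eigenvector of the largest eigenvalue.

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# W = Q diag(1.2, 0.5) Q^T where Q is the 45-degree rotation, so the
# eigenvectors are [1, 1] (eigenvalue 1.2) and [1, -1] (eigenvalue 0.5).
W = [[0.85, 0.35], [0.35, 0.85]]

h = [1.0, 0.0]   # equal parts of both eigenvectors
for _ in range(30):
    h = matvec(W, h)
# The component along eigenvalue 0.5 has decayed to (almost) nothing;
# what remains is aligned with [1, 1] and has grown like 1.2**30 / 2.
```

After 30 steps the two coordinates of h are essentially equal (the [1, -1] component has vanished) and their magnitude has exploded by more than two orders of magnitude.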
Suppose that the w^(t) values are generated randomly, independently from one another, with zero mean and variance v. The variance of the product is O(v^n). To obtain some desired variance v* we may choose the individual weights with variance v = (v*)^(1/n). Very deep feedforward networks with carefully chosen scaling can thus avoid the vanishing and exploding gradient problem, as argued by Sussillo (2014). The vanishing and exploding gradient problem for RNNs was independently discovered by separate researchers (Hochreiter, 1991; Bengio et al., 1993, 1994). One may hope that the problem can be avoided simply by staying in a region of parameter space where the gradients do not vanish or explode. Unfortunately, in order to store memories in a way that is robust to small perturbations, the RNN must enter a region of parameter space where gradients vanish (Bengio et al., 1993, 1994). Specifically, whenever the model is able to represent long-term dependencies, the gradient of a long-term interaction has exponentially smaller magnitude than the gradient of a short-term interaction. It does not mean that it is impossible to learn, but that it might take a very long time
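The scaling rule v = (v*)^(1/n) for the per-step weight variance can be checked with a short simulation. To keep the check exact, this sketch (not from the book) uses sign-only random weights ±√v, which have zero mean and variance exactly v, rather than Gaussian weights; the comparison line shows how a fixed per-step variance below one makes the product variance vanish with depth.

```python
import random

random.seed(0)

n = 50                       # depth: number of per-step weights multiplied
v_star = 0.25                # desired variance of the n-step product
v = v_star ** (1.0 / n)      # per-step variance: the nth root of v*

def product_of_weights():
    # Sign-only weights +/- sqrt(v): zero mean and variance exactly v,
    # so every product has squared magnitude exactly v**n = v_star.
    p = 1.0
    for _ in range(n):
        p *= random.choice([-1.0, 1.0]) * v ** 0.5
    return p

samples = [product_of_weights() for _ in range(1000)]
emp_var = sum(s * s for s in samples) / len(samples)

# Contrast: keeping a fixed per-step variance of 0.9 regardless of depth
# gives a product variance of 0.9**n, which vanishes as n grows.
naive_var = 0.9 ** n
```

The empirical variance of the 50-step product matches the target v* = 0.25, while the unscaled variant has all but vanished at the same depth.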