simulated annealing, which adds noise to the parameters (Kirkpatrick et al., 1983). Continuation methods have been extremely successful in recent years. See Mobahi and Fisher (2015) for an overview of recent literature, especially for AI applications.

Continuation methods traditionally were mostly designed with the goal of overcoming the challenge of local minima. Specifically, they were designed to reach a global minimum despite the presence of many local minima. To do so, these continuation methods would construct easier cost functions by "blurring" the original cost function. This blurring operation can be done by approximating

$$J^{(i)}(\theta) = \mathbb{E}_{\theta' \sim \mathcal{N}(\theta'; \theta,\, \sigma^{(i)2})}\, J(\theta')$$  (8.40)

via sampling.
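As a rough illustration (my own sketch, not the book's code), the blurred cost in equation 8.40 can be estimated by Monte Carlo sampling; the toy cost function and helper names below are hypothetical:

import numpy as np

def blurred_cost(J, theta, sigma, n_samples=1000, rng=None):
    # Monte Carlo estimate of E_{theta' ~ N(theta, sigma^2 I)} [ J(theta') ], as in eq. 8.40.
    rng = np.random.default_rng(0) if rng is None else rng
    thetas = theta + sigma * rng.standard_normal((n_samples, theta.size))
    return np.mean([J(t) for t in thetas])

# Hypothetical non-convex toy cost: heavy blurring washes out the oscillations,
# leaving something close to the convex quadratic term.
J = lambda th: np.sum(th ** 2 + np.cos(10.0 * th))
theta = np.array([1.5, -0.7])
for sigma in [3.0, 1.0, 0.3, 0.01]:  # a continuation sequence that anneals the blur toward zero
    print(f"sigma={sigma:5.2f}  blurred J={blurred_cost(J, theta, sigma):8.3f}")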
The intuition for this approach is that some non-convex functions become approximately convex when blurred. In many cases, this blurring preserves enough information about the location of a global minimum that we can find the global minimum by solving progressively less blurred versions of the problem. This approach can break down in three different ways. First, it might successfully define a series of cost functions where the first is convex and the optimum tracks from one function to the next arriving at the global minimum, but it might require so many incremental cost functions that the cost of the entire procedure remains high. NP-hard optimization problems remain NP-hard, even when continuation methods are applicable. The other two ways that continuation methods fail both correspond to the method not being applicable. First, the function might not become convex, no matter how much it is blurred. Consider for example the function $J(\theta) = -\theta^\top \theta$. Second, the function may become convex as a result of blurring, but the minimum of this blurred function may track to a local rather than a global minimum of the original cost function. Though continuation methods were mostly originally designed to deal with the problem of local minima, local minima are no longer believed to be the primary problem for neural network optimization. Fortunately, continuation methods
can still help. The easier objective functions introduced by the continuation method can eliminate flat regions, decrease variance in gradient estimates, improve conditioning of the Hessian matrix, or do anything else that will either make local updates easier to compute or improve the correspondence between local update directions and progress toward a global solution.

Bengio et al. (2009) observed that an approach called curriculum learning or shaping can be interpreted as a continuation method. Curriculum learning is based on the idea of planning a learning process to begin by learning simple concepts and progress to learning more complex concepts that depend on these simpler concepts. This basic strategy was previously known to accelerate progress in animal training (Skinner, 1958; Peterson, 2004; Krueger and Dayan, 2009) and machine learning (Elman, 1993; Sanger, 1994). Bengio et al. (2009) justified this strategy as a continuation method, where earlier $J^{(i)}$ are made easier by increasing the influence of simpler examples (either
by assigning their contributions to the cost function larger coefficients, or by sampling them more frequently), and experimentally demonstrated that better results could be obtained by following a curriculum on a large-scale neural language modeling task. Curriculum learning has been successful on a wide range of natural language (Spitkovsky et al., 2010; Collobert et al., 2011a; Mikolov et al., 2011b; Tu and Honavar, 2011) and computer vision (Kumar et al., 2010; Lee and Grauman, 2011; Supancic and Ramanan, 2013) tasks. Curriculum learning was also verified as being consistent with the way in which humans teach (Khan et al., 2011): teachers start by showing easier and
more prototypical examples and then help the learner refine the decision surface with the less obvious cases. Curriculum-based strategies are more effective for teaching humans than strategies based on uniform sampling of examples, and can also increase the effectiveness of other teaching strategies (Basu and Christensen, 2013).

Another important contribution to research on curriculum learning arose in the context of training recurrent neural networks to capture long-term dependencies: Zaremba and Sutskever (2014) found that much better results were obtained with a stochastic curriculum, in which a random mix of easy and difficult examples is always presented to the learner, but where the average proportion of the more difficult examples (here, those with longer-term dependencies) is gradually increased. With a deterministic curriculum, no improvement over the baseline (ordinary training from the full training set) was observed.

We have now described the basic family of neural network models and how to regularize and optimize them. In the chapters ahead, we turn to specializations of the neural network family that allow neural networks to scale to very large sizes and process input data that has special structure. The optimization methods discussed in this chapter are often directly applicable to these specialized
architectures with little or no modification.
Chapter 9: Convolutional Networks

Convolutional networks (LeCun, 1989), also known as convolutional neural networks, or CNNs, are a specialized kind of neural network for processing data that has a known, grid-like topology. Examples include time-series data, which can be thought of as a 1-D grid taking samples at regular time intervals, and image data, which can be thought of as a 2-D grid of pixels. Convolutional networks have been tremendously successful in practical applications. The name "convolutional neural network" indicates that the network employs a mathematical operation called convolution. Convolution is a specialized kind of linear operation. Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers.

In this chapter, we will first describe what convolution is. Next, we will explain the motivation behind using convolution in a neural network. We will then describe an operation called pooling, which almost all convolutional networks employ. Usually, the operation used in a convolutional neural network does not correspond precisely to
the definition of convolution as used in other fields such as engineering or pure mathematics. We will describe several variants on the convolution function that are widely used in practice for neural networks. We will also show how convolution may be applied to many kinds of data, with different numbers of dimensions. We then discuss means of making convolution more efficient. Convolutional networks stand out as an example of neuroscientific principles influencing deep learning. We will discuss these neuroscientific principles, then conclude with comments about the role convolutional networks have played in the history of deep learning. One topic this chapter does not address is how to choose the architecture of your convolutional network. The goal of this chapter is to describe the kinds of tools that convolutional networks provide, while chapter 11
describes general guidelines for choosing which tools to use in which circumstances. Research into convolutional network architectures proceeds so rapidly that a new best architecture for a given benchmark is announced every few weeks to months, rendering it impractical to describe the best architecture in print. However, the best architectures have consistently been composed of the building blocks described here.

9.1 The Convolution Operation

In its most general form, convolution is an operation on two functions of a real-valued argument. To motivate the definition of convolution, we start with examples of two functions we might use. Suppose we are tracking the location of a spaceship with a laser sensor. Our laser sensor provides a single output x(t), the position of the spaceship at time t. Both x and t are real-valued, i.e., we can get a different reading from the laser sensor at any instant in time.

Now suppose that our laser sensor is somewhat noisy. To obtain a less noisy estimate of the spaceship's position, we would like to average together several measurements. Of course, more recent measurements are more relevant, so we will want this to be a
weighted average that gives more weight to recent measurements. We can do this with a weighting function w(a), where a is the age of a measurement. If we apply such a weighted average operation at every moment, we obtain a new function s providing a smoothed estimate of the position of the spaceship:

$$s(t) = \int x(a)\, w(t - a)\, da.$$  (9.1)

This operation is called convolution. The convolution operation is typically denoted with an asterisk:

$$s(t) = (x * w)(t).$$  (9.2)

In our example, w needs to be a valid probability density function, or the output is not a weighted average. Also, w needs to be 0 for all negative arguments, or it will look into the future, which is presumably beyond our capabilities. These limitations are particular to our example though. In general, convolution is defined for any functions for which the above integral is defined,
and may be used for other purposes besides taking weighted averages.

In convolutional network terminology, the first argument (in this example, the function x) to the convolution is often referred to as the input, and the second
argument (in this example, the function w) as the kernel. The output is sometimes referred to as the feature map.

In our example, the idea of a laser sensor that can provide measurements at every instant in time is not realistic. Usually, when we work with data on a computer, time will be discretized, and our sensor will provide data at regular intervals. In our example, it might be more realistic to assume that our laser provides a measurement once per second. The time index t can then take on only integer values. If we now assume that x and w are defined only on integer t, we can define the discrete convolution:

$$s(t) = (x * w)(t) = \sum_{a=-\infty}^{\infty} x(a)\, w(t - a).$$  (9.3)
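A minimal sketch of equation 9.3 for finite signals stored as arrays (my own illustration, not code from the book; the signal values are made up):

import numpy as np

def discrete_conv(x, w):
    # Full 1-D discrete convolution s(t) = sum_a x(a) w(t - a),
    # treating x and w as zero outside their stored entries.
    s = np.zeros(len(x) + len(w) - 1)
    for a in range(len(x)):
        for b in range(len(w)):
            s[a + b] += x[a] * w[b]   # the pair (a, b) contributes to output position t = a + b
    return s

x = np.array([0.0, 1.0, 2.0, 3.0])   # e.g. noisy position readings
w = np.array([0.5, 0.3, 0.2])        # weighting function, most recent measurement weighted most
print(discrete_conv(x, w))
print(np.convolve(x, w))             # NumPy's built-in convolution gives the same result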
In machine learning applications, the input is usually a multidimensional array of data, and the kernel is usually a multidimensional array of parameters that are adapted by the learning algorithm. We will refer to these multidimensional arrays as tensors. Because each element of the input and kernel must be explicitly stored separately, we usually assume that these functions are zero everywhere but the finite set of points for which we store the values. This means that in practice we can implement the infinite summation as a summation over a finite number of array elements.

Finally, we often use convolutions over more than one axis at a time. For example, if we use a two-dimensional image I as our input, we probably also want to use a two-dimensional kernel K:

$$S(i, j) = (I * K)(i, j) = \sum_m \sum_n I(m, n)\, K(i - m, j - n).$$  (9.4)

Convolution is commutative, meaning we can equivalently write:

$$S(i, j) = (K * I)(i, j) = \sum_m \sum_n I(i - m, j - n)\, K(m, n).$$  (9.5)

Usually the latter formula is more straightforward to implement in a machine learning library,
because there is less variation in the range of valid values of m and n. The commutative property of convolution arises because we have flipped the kernel relative to the input, in the sense that as m increases, the index into the input increases, but the index into the kernel decreases. The only reason to flip the kernel is to obtain the commutative property. While the commutative property
is useful for writing proofs, it is not usually an important property of a neural network implementation. Instead, many neural network libraries implement a related function called the cross-correlation, which is the same as convolution but without flipping the kernel:

$$S(i, j) = (I * K)(i, j) = \sum_m \sum_n I(i + m, j + n)\, K(m, n).$$  (9.6)

Many machine learning libraries implement cross-correlation but call it convolution. In this text we will follow this convention of calling both operations convolution, and specify whether we mean to flip the kernel or not in contexts where kernel flipping is relevant. In the context of machine learning, the learning algorithm will learn the appropriate values of the kernel in the appropriate place, so an algorithm based on convolution with kernel flipping will learn a kernel that is flipped relative to the kernel learned by an algorithm without the flipping.
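To make the flipping distinction concrete, here is a small sketch of my own (not reference code) of valid 2-D cross-correlation as in equation 9.6, together with true convolution obtained by flipping the kernel:

import numpy as np

def cross_correlate2d(I, K):
    # Valid cross-correlation: S[i, j] = sum_{m,n} I[i + m, j + n] * K[m, n]  (eq. 9.6).
    out_h = I.shape[0] - K.shape[0] + 1
    out_w = I.shape[1] - K.shape[1] + 1
    S = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            S[i, j] = np.sum(I[i:i + K.shape[0], j:j + K.shape[1]] * K)
    return S

def convolve2d(I, K):
    # True convolution is cross-correlation with a flipped kernel.
    return cross_correlate2d(I, K[::-1, ::-1])

I = np.arange(12.0).reshape(3, 4)
K = np.array([[1.0, 2.0], [3.0, 4.0]])
print(cross_correlate2d(I, K))   # what most deep learning libraries call "convolution"
print(convolve2d(I, K))          # the textbook definition with kernel flipping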
It is also rare for convolution to be used alone in machine learning; instead convolution is used simultaneously with other functions, and the combination of these functions does not commute regardless of whether the convolution operation flips its kernel or not. See figure 9.1 for an example of convolution (without kernel flipping) applied to a 2-D tensor.

Discrete convolution can be viewed as multiplication by a matrix. However, the matrix has several entries constrained to be equal to other entries. For example, for univariate discrete convolution, each row of the matrix is constrained to be equal to the row above shifted by one element. This is known as a Toeplitz matrix. In two dimensions, a doubly block circulant matrix corresponds to convolution. In addition to these constraints that several elements be equal to each other, convolution usually corresponds to a very sparse matrix (a matrix whose entries are mostly equal to zero). This is because the kernel is usually much smaller than the input image.
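As an unofficial sketch of this matrix view, the following builds the sparse, Toeplitz-structured matrix whose rows are shifted copies of a 1-D kernel and checks that multiplying by it reproduces sliding the kernel across the input (the example values are made up):

import numpy as np

def conv_matrix(w, n):
    # Matrix W such that W @ x slides the kernel w across an input x of length n (no flipping).
    # Each row equals the row above shifted right by one element: a Toeplitz structure.
    k = len(w)
    W = np.zeros((n - k + 1, n))
    for i in range(n - k + 1):
        W[i, i:i + k] = w
    return W

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0])
w = np.array([0.5, -0.5])
W = conv_matrix(w, len(x))
print(W)        # mostly zeros: the kernel is much smaller than the input
print(W @ x)    # identical to applying the kernel at every valid position of x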
Any neural network algorithm that works with matrix multiplication and does not depend on specific properties of the matrix structure should work with convolution, without requiring any further changes to the neural network. Typical convolutional neural networks do make use of further specializations in order to deal with large inputs efficiently, but these are not strictly necessary from a theoretical perspective.
Figure 9.1: An example of 2-D convolution without kernel flipping. The input is the 3x4 grid

a b c d
e f g h
i j k l

the kernel is the 2x2 grid

w x
y z

and the output is the 2x3 grid of valid positions:

aw+bx+ey+fz   bw+cx+fy+gz   cw+dx+gy+hz
ew+fx+iy+jz   fw+gx+jy+kz   gw+hx+ky+lz

In this case we restrict the output to only positions where the kernel lies entirely within the image, called "valid" convolution in some contexts. We draw boxes with arrows to indicate how the upper-left element of the output tensor is formed by applying the kernel to the corresponding upper-left region of the input tensor.
9.2 Motivation

Convolution leverages three important ideas that can help improve a machine learning system: sparse interactions, parameter sharing and equivariant representations. Moreover, convolution provides a means for working with inputs of variable size. We now describe each of these ideas in turn.

Traditional neural network layers use matrix multiplication by a matrix of parameters with a separate parameter describing the interaction between each input unit and each output unit. This means every output unit interacts with every input unit. Convolutional networks, however, typically have sparse interactions (also referred to as sparse connectivity or sparse weights). This is accomplished by making the kernel smaller than the input. For example, when processing an image, the input image might have thousands or millions of pixels, but we can detect small, meaningful features such as edges with kernels that occupy only tens or hundreds of pixels. This means that we need to store fewer parameters, which both reduces the memory requirements of the model and improves its statistical efficiency. It also means that computing the output requires fewer operations. These improvements in efficiency are usually quite large. If there are m inputs and n outputs, then matrix multiplication requires m × n parameters, and the algorithms used
in practice have O(m × n) runtime (per example). If we limit the number of connections each output may have to k, then the sparsely connected approach requires only k × n parameters and O(k × n) runtime. For many practical applications, it is possible to obtain good performance on the machine learning task while keeping k several orders of magnitude smaller than m. For graphical demonstrations of sparse connectivity, see figure 9.2 and figure 9.3. In a deep convolutional network, units in the deeper layers may indirectly interact with a larger portion of the input, as shown in figure 9.4. This allows the network to efficiently describe complicated interactions between many variables by constructing such interactions from simple building blocks that each describe only sparse interactions.
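A tiny back-of-the-envelope check of the parameter counts just described (the sizes are hypothetical, chosen only for illustration):

m, n, k = 1_000_000, 1_000_000, 100     # inputs, outputs, connections allowed per output
dense_params = m * n                    # fully connected layer: one weight per input-output pair
sparse_params = k * n                   # sparsely connected layer: k weights per output
print(f"dense:  {dense_params:.1e} parameters")
print(f"sparse: {sparse_params:.1e} parameters ({dense_params // sparse_params:,}x fewer)")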
Parameter sharing refers to using the same parameter for more than one function in a model. In a traditional neural net, each element of the weight matrix is used exactly once when computing the output of a layer. It is multiplied by one element of the input and then never revisited. As a synonym for parameter sharing, one can say that a network has tied weights, because the value of the weight applied to one input is tied to the value of a weight applied elsewhere. In a convolutional neural net, each member of the kernel is used at every position of the input (except perhaps some of the boundary pixels, depending on the design decisions regarding the boundary).
Figure 9.2: Sparse connectivity, viewed from below. We highlight one input unit, x3, and also highlight the output units in s that are affected by this unit. (Top) When s is formed by convolution with a kernel of width 3, only three outputs are affected by x3. (Bottom) When s is formed by matrix multiplication, connectivity is no longer sparse, so all of the outputs are affected by x3.
Figure 9.3: Sparse connectivity, viewed from above. We highlight one output unit, s3, and also highlight the input units in x that affect this unit. These units are known as the receptive field of s3. (Top) When s is formed by convolution with a kernel of width 3, only three inputs affect s3. (Bottom) When s is formed by matrix multiplication, connectivity is no longer sparse, so all of the inputs affect s3.
Figure 9.4: The receptive field of the units in the deeper layers of a convolutional network is larger than the receptive field of the units in the shallow layers. This effect increases if the network includes architectural features like strided convolution (figure 9.12) or pooling (section 9.3). This means that even though direct connections in a convolutional net are very sparse, units in the deeper layers can be indirectly connected to all or most of the input image.
Figure 9.5: Parameter sharing. Black arrows indicate the connections that use a particular parameter in two different models. (Top) The black arrows indicate uses of the central element of a 3-element kernel in a convolutional model. Due to parameter sharing, this single parameter is used at all input locations. (Bottom) The single black arrow indicates the use of the central element of the weight matrix in a fully connected model. This model has no parameter sharing, so the parameter is used only once.

The parameter sharing used by the convolution operation means that rather than learning a separate set of parameters for every location, we learn only one set. This does not affect the runtime of forward propagation (it is still O(k × n)) but it does further reduce the storage requirements of the model to k parameters. Recall that k is usually several orders of magnitude less than m. Since m and n are
usually roughly the same size, k is practically insignificant compared to m × n. Convolution is thus dramatically more efficient than dense matrix multiplication in terms of the memory requirements and statistical efficiency. For a graphical depiction of how parameter sharing works, see figure 9.5.

As an example of both of these first two principles in action, figure 9.6 shows how sparse connectivity and parameter sharing can dramatically improve the efficiency of a linear function for detecting edges in an image.

In the case of convolution, the particular form of parameter sharing causes the layer to have a property called equivariance to translation. To say a function is equivariant means that if the input changes, the output changes in the same way. Specifically, a function f(x) is equivariant to a function g if f(g(x)) = g(f(x)).
In the case of convolution, if we let g be any function that translates the input, i.e., shifts it, then the convolution function is equivariant to g. For example, let I be a function giving image brightness at integer coordinates. Let g be a function
mapping one image function to another image function, such that I' = g(I) is the image function with I'(x, y) = I(x - 1, y). This shifts every pixel of I one unit to the right. If we apply this transformation to I, then apply convolution, the result will be the same as if we applied convolution to I, then applied the transformation g to the output. When processing time series data, this means that convolution produces a sort of timeline that shows when different features appear in the input. If we move an event later in time in the input, the exact same representation of it will appear in the output, just later in time. Similarly with images, convolution creates a 2-D map of where certain features appear in the input. If we move the object in the input, its representation will move the same amount in the output. This is useful for when we know that some function of a small number of neighboring pixels is useful when applied to multiple input locations.
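This equivariance can be checked numerically. The sketch below is my own (it reuses the valid cross-correlation from earlier and random stand-in data); away from the border introduced by the shift, convolving the shifted image gives the shifted output:

import numpy as np

def cross_correlate2d(I, K):
    out = np.empty((I.shape[0] - K.shape[0] + 1, I.shape[1] - K.shape[1] + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(I[i:i + K.shape[0], j:j + K.shape[1]] * K)
    return out

rng = np.random.default_rng(0)
I = rng.random((6, 6))
K = rng.random((3, 3))

# g shifts the image one pixel to the right along the second axis.
I_shifted = np.zeros_like(I)
I_shifted[:, 1:] = I[:, :-1]

out = cross_correlate2d(I, K)
out_shifted = cross_correlate2d(I_shifted, K)

# Equivariance to translation: the output of the shifted input is the shifted output,
# up to the border column that the shift discards.
print(np.allclose(out_shifted[:, 1:], out[:, :-1]))   # True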
For example, when processing images, it is useful to detect edges in the first layer of a convolutional network. The same edges appear more or less everywhere in the image, so it is practical to share parameters across the entire image. In some cases, we may not wish to share parameters across the entire image. For example, if we are processing images that are cropped to be centered on an individual's face, we probably want to extract different features at different locations: the part of the network processing the top of the face needs to look for eyebrows, while the part of the network processing the bottom of the face needs to look for a chin.

Convolution is not naturally equivariant to some other transformations, such as changes in the scale or rotation of an image. Other mechanisms are necessary for handling these kinds of transformations.

Finally, some kinds of data cannot be processed by neural networks defined by matrix multiplication with a fixed-shape matrix. Convolution enables processing of some of these kinds of data. We discuss this further in section 9.7.
9.3 Pooling

A typical layer of a convolutional network consists of three stages (see figure 9.7). In the first stage, the layer performs several convolutions in parallel to produce a set of linear activations. In the second stage, each linear activation is run through a nonlinear activation function, such as the rectified linear activation function. This stage is sometimes called the detector stage. In the third stage, we use a pooling function to modify the output of the layer further.

A pooling function replaces the output of the net at a certain location with a summary statistic of the nearby outputs.
Figure 9.6: Efficiency of edge detection. The image on the right was formed by taking each pixel in the original image and subtracting the value of its neighboring pixel on the left. This shows the strength of all of the vertically oriented edges in the input image, which can be a useful operation for object detection. Both images are 280 pixels tall. The input image is 320 pixels wide while the output image is 319 pixels wide. This transformation can be described by a convolution kernel containing two elements, and requires 319 × 280 × 3 = 267,960 floating point operations (two multiplications and one addition per output pixel) to compute using convolution. To describe the same transformation with a matrix multiplication would take 320 × 280 × 319 × 280, or over eight billion, entries in the matrix, making convolution four billion times more efficient for representing this transformation. The straightforward matrix multiplication algorithm performs over sixteen billion floating point operations, making convolution roughly 60,000 times more efficient computationally. Of course, most of the entries of the matrix would be zero.
If we stored only the nonzero entries of the matrix, then both matrix multiplication and convolution would require the same number of floating point operations to compute. The matrix would still need to contain 2 × 319 × 280 = 178,640 entries. Convolution is an extremely efficient way of describing transformations that apply the same linear transformation of a small, local region across the entire input. (Photo credit: Paula Goodfellow)
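The edge detector described in figure 9.6 amounts to a width-2 kernel applied along each row; a rough sketch of mine, with a random array standing in for the photograph:

import numpy as np

rng = np.random.default_rng(0)
image = rng.random((280, 320))      # stand-in for the 280-pixel-tall, 320-pixel-wide input photo

# Each output pixel is the input pixel minus its neighbor to the left,
# i.e. cross-correlation with the two-element kernel [-1, 1] along each row.
edges = image[:, 1:] - image[:, :-1]
print(edges.shape)                  # (280, 319): one column narrower than the input, as in the figure

# The figure's operation count for a generic two-element kernel
# (two multiplications and one addition per output pixel):
print(319 * 280 * 3)                # 267,960 floating point operations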
Figure 9.7: The components of a typical convolutional neural network layer (input to layer, then convolution stage: affine transform; detector stage: nonlinearity, e.g. rectified linear; pooling stage; then next layer). There are two commonly used sets of terminology for describing these layers. (Left) In this terminology, the convolutional net is viewed as a small number of relatively complex layers, with each layer having many "stages." In this terminology, there is a one-to-one mapping between kernel tensors and network layers. In this book we generally use this terminology. (Right) In this terminology, the convolutional net is viewed as a larger number of simple layers; every step of processing is regarded as a layer in its own right. This means that not every "layer" has parameters.
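A compact sketch (my own, not the book's code) of one such "complex layer": a convolution stage over a single channel, a rectified linear detector stage, and a max pooling stage, following figure 9.7:

import numpy as np

def conv_layer(I, K, pool=2):
    # Convolution stage: valid cross-correlation of a single-channel image with one kernel
    # (the bias of the affine transform is omitted for brevity).
    h, w = I.shape[0] - K.shape[0] + 1, I.shape[1] - K.shape[1] + 1
    z = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            z[i, j] = np.sum(I[i:i + K.shape[0], j:j + K.shape[1]] * K)
    # Detector stage: elementwise rectified linear nonlinearity.
    a = np.maximum(z, 0.0)
    # Pooling stage: max over non-overlapping pool x pool neighborhoods.
    ph, pw = h // pool, w // pool
    return a[:ph * pool, :pw * pool].reshape(ph, pool, pw, pool).max(axis=(1, 3))

rng = np.random.default_rng(0)
print(conv_layer(rng.standard_normal((9, 9)), rng.standard_normal((2, 2))).shape)   # (4, 4)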
For example, the max pooling (Zhou and Chellappa, 1988) operation reports the maximum output within a rectangular neighborhood. Other popular pooling functions include the average of a rectangular neighborhood, the L2 norm of a rectangular neighborhood, or a weighted average based on the distance from the central pixel.

In all cases, pooling helps to make the representation become approximately invariant to small translations of the input. Invariance to translation means that if we translate the input by a small amount, the values of most of the pooled outputs do not change. See figure 9.8 for an example of how this works. Invariance to local translation can be a very useful property if we care more about whether some feature is present than exactly where it is. For example, when determining whether an image contains a face, we need not know the location of the eyes with pixel-perfect accuracy, we just need to know that there is an eye on the left side of the face and an eye on the right side of the face.
In other contexts, it is more important to preserve the location of a feature. For example, if we want to find a corner defined by two edges meeting at a specific orientation, we need to preserve the location of the edges well enough to test whether they meet.

The use of pooling can be viewed as adding an infinitely strong prior that the function the layer learns must be invariant to small translations. When this assumption is correct, it can greatly improve the statistical efficiency of the network. Pooling over spatial regions produces invariance to translation, but if we pool over the outputs of separately parametrized convolutions, the features can learn which transformations to become invariant to (see figure 9.9).

Because pooling summarizes the responses over a whole neighborhood, it is possible to use fewer pooling units than detector units, by reporting summary statistics for pooling regions spaced k pixels apart rather than 1 pixel apart. See figure 9.10 for an example. This improves the computational efficiency of the network because the next layer has roughly k times fewer inputs to process. When the number of parameters in the next layer is a function of its input size (such as when
the next layer is fully connected and based on matrix multiplication), this reduction in the input size can also result in improved statistical efficiency and reduced memory requirements for storing the parameters.

For many tasks, pooling is essential for handling inputs of varying size. For example, if we want to classify images of variable size, the input to the classification layer must have a fixed size. This is usually accomplished by varying the size of an offset between pooling regions so that the classification layer always receives the same number of summary statistics regardless of the input size. For example, the final pooling layer of the network may be defined to output four sets of summary statistics, one for each quadrant of an image, regardless of the image size.
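A sketch (mine, not from the book) of that last idea: max-pooling a feature map of any size down to one summary statistic per quadrant, so the classifier always receives a fixed number of values:

import numpy as np

def quadrant_max_pool(feature_map):
    # Max-pool a 2-D feature map of arbitrary size into a fixed 2x2 grid, one value per quadrant.
    h, w = feature_map.shape
    rows = [slice(0, h // 2), slice(h // 2, h)]
    cols = [slice(0, w // 2), slice(w // 2, w)]
    return np.array([[feature_map[r, c].max() for c in cols] for r in rows])

rng = np.random.default_rng(0)
for shape in [(32, 48), (100, 77), (9, 9)]:
    print(shape, "->", quadrant_max_pool(rng.random(shape)).shape)   # always (2, 2)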
Figure 9.8: Max pooling introduces invariance. (Top) A view of the middle of the output of a convolutional layer. The bottom row shows outputs of the nonlinearity. The top row shows the outputs of max pooling, with a stride of one pixel between pooling regions and a pooling region width of three pixels. (Bottom) A view of the same network, after the input has been shifted to the right by one pixel. Every value in the bottom row has changed, but only half of the values in the top row have changed, because the max pooling units are only sensitive to the maximum value in the neighborhood, not its exact location.
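The invariance illustrated in figure 9.8 can be reproduced in a few lines; this is a hypothetical example of mine, using a pooling width of three and a stride of one as in the figure:

import numpy as np

def max_pool_1d(x, width=3):
    # Stride-one max pooling: each output is the max over a window of `width` detector outputs.
    return np.array([x[i:i + width].max() for i in range(len(x) - width + 1)])

detector = np.array([0.1, 1.0, 0.2, 0.1, 0.0, 0.3])
shifted = np.concatenate(([0.0], detector[:-1]))   # detector outputs after shifting the input right by one

print(max_pool_1d(detector))   # [1.  1.  0.2 0.3]
print(max_pool_1d(shifted))    # [1.  1.  1.  0.2]: every detector value moved, but half the pooled values are unchanged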
Figure 9.9: Example of learned invariances. A pooling unit that pools over multiple features that are learned with separate parameters can learn to be invariant to transformations of the input. Here we show how a set of three learned filters and a max pooling unit can learn to become invariant to rotation. All three filters are intended to detect a hand-written 5. Each filter attempts to match a slightly different orientation of the 5. When a 5 appears in the input, the corresponding filter will match it and cause a large activation in a detector unit. The max pooling unit then has a large activation regardless of which detector unit was activated. We show here how the network processes two different inputs, resulting in two different detector units being activated. The effect on the pooling unit is roughly the same either way.
This principle is leveraged by maxout networks (Goodfellow et al., 2013a) and other convolutional networks. Max pooling over spatial positions is naturally invariant to translation; this multi-channel approach is only necessary for learning other transformations.

Figure 9.10: Pooling with downsampling. Here we use max pooling with a pool width of three and a stride between pools of two. This reduces the representation size by a factor of two, which reduces the computational and statistical burden on the next layer. Note that the rightmost pooling region has a smaller size, but must be included if we do not want to ignore some of the detector units.
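A corresponding sketch of figure 9.10's downsampled pooling (my own, with made-up detector values): pool width three, stride two, keeping the smaller rightmost region so no detector unit is ignored.

import numpy as np

def strided_max_pool_1d(x, width=3, stride=2):
    # Report one summary statistic every `stride` positions instead of every position.
    return np.array([x[i:i + width].max() for i in range(0, len(x), stride)])

detector = np.array([0.1, 1.0, 0.2, 0.1, 0.0, 0.1])
print(strided_max_pool_1d(detector))   # 6 detector outputs -> 3 pooled outputs; the last window is smaller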
Some theoretical work gives guidance as to which kinds of pooling one should use in various situations (Boureau et al., 2010). It is also possible to dynamically pool features together, for example, by running a clustering algorithm on the locations of interesting features (Boureau et al., 2011). This approach yields a different set of pooling regions for each image. Another approach is to learn a single pooling structure that is then applied to all images (Jia et al., 2012).

Pooling can complicate some kinds of neural network architectures that use top-down information, such as Boltzmann machines and autoencoders. These issues will be discussed further when we present these types of networks in part III. Pooling in convolutional Boltzmann machines is presented in section 20.6. The inverse-like operations on pooling units needed in some generative networks will be covered in section 20.10.6.

Some examples of complete convolutional network architectures for classification using convolution and pooling are shown in figure 9.11.
9.4 Convolution and Pooling as an Infinitely Strong Prior

Recall the concept of a prior probability distribution from section 5.2. This is a probability distribution over the parameters of a model that encodes our beliefs about what models are reasonable, before we have seen any data. Priors can be considered weak or strong depending on how concentrated the probability density in the prior is. A weak prior is a prior distribution with high entropy, such as a Gaussian distribution with high variance. Such a prior allows the data to move the parameters more or less freely. A strong prior has very low entropy, such as a Gaussian distribution with low variance. Such a prior plays a more active role in determining where the parameters end up. An infinitely strong prior places zero probability on some parameters and says that these parameter values are completely forbidden, regardless of how much support the data gives to those values. We can imagine a convolutional net as being similar to a fully connected net, but with an infinitely strong
prior over its weights. This infinitely strong prior says that the weights for one hidden unit must be identical to the weights of its neighbor, but shifted in space. The prior also says that the weights must be zero, except for in the small, spatially contiguous receptive field assigned to that hidden unit. Overall, we can think of the use of convolution as introducing an infinitely strong prior probability distribution over the parameters of a layer.
[Figure 9.11 diagrams, left and center networks: (Left) input image 256x256x3 -> convolution + ReLU: 256x256x64 -> pooling with stride 4: 64x64x64 -> convolution + ReLU: 64x64x64 -> pooling with stride 4: 16x16x64 -> reshape to vector: 16,384 units -> matrix multiply: 1,000 units -> softmax: 1,000 class probabilities. (Center) input image 256x256x3 -> convolution + ReLU: 256x256x64 -> pooling with stride 4: 64x64x64 -> convolution + ReLU: 64x64x64 -> pooling to 3x3 grid: 3x3x64 -> reshape to vector: 576 units -> matrix multiply: 1,000 units -> softmax: 1,000 class probabilities.
(Right) input image 256x256x3 -> convolution + ReLU: 256x256x64 -> pooling with stride 4: 64x64x64 -> convolution + ReLU: 64x64x64 -> pooling with stride 4: 16x16x64 -> convolution: 16x16x1,000 -> average pooling: 1x1x1,000 -> softmax: 1,000 class probabilities.]

Figure 9.11: Examples of architectures for classification with convolutional networks. The specific strides and depths used in this figure are not advisable for real use; they are designed to be very shallow in order to fit onto the page. Real convolutional networks also often involve significant amounts of branching, unlike the chain structures used here for simplicity. (Left) A convolutional network that processes a fixed image size. After alternating between convolution and pooling for a few layers, the tensor for the convolutional feature map is reshaped to flatten out the spatial dimensions.
The rest of the network is an ordinary feedforward network classifier, as described in chapter 6. (Center) A convolutional network that processes a variable-sized image, but still maintains a fully connected section. This network uses a pooling operation with variably-sized pools but a fixed number of pools, in order to provide a fixed-size vector of 576 units to the fully connected portion of the network. (Right) A convolutional network that does not have any fully connected weight layer. Instead, the last convolutional layer outputs one feature map per class. The model presumably learns a map of how likely each class is to occur at each spatial location. Averaging a feature map down to a single value provides the argument to the softmax classifier at the top.
This prior says that the function the layer should learn contains only local interactions and is equivariant to translation. Likewise, the use of pooling is an infinitely strong prior that each unit should be invariant to small translations. Of course, implementing a convolutional net as a fully connected net with an infinitely strong prior would be extremely computationally wasteful. But thinking of a convolutional net as a fully connected net with an infinitely strong prior can give us some insights into how convolutional nets work.

One key insight is that convolution and pooling can cause underfitting. Like any prior, convolution and pooling are only useful when the assumptions made by the prior are reasonably accurate. If a task relies on preserving precise spatial information, then using pooling on all features can increase the training error. Some convolutional network architectures (Szegedy et al., 2014a) are designed to use pooling on some channels but not on other channels, in order to get both highly invariant features and features that will not underfit when the translation invariance prior is incorrect.
When a task involves incorporating information from very distant locations in the input, then the prior imposed by convolution may be inappropriate.

Another key insight from this view is that we should only compare convolutional models to other convolutional models in benchmarks of statistical learning performance. Models that do not use convolution would be able to learn even if we permuted all of the pixels in the image. For many image datasets, there are separate benchmarks for models that are permutation invariant and must discover the concept of topology via learning, and models that have the knowledge of spatial relationships hard-coded into them by their designer.

9.5 Variants of the Basic Convolution Function

When discussing convolution in the context of neural networks, we usually do not refer exactly to the standard discrete convolution operation as it is usually understood in the mathematical literature. The functions used in practice differ slightly. Here we describe these differences in detail, and
highlight some useful properties of the functions used in neural networks.

First, when we refer to convolution in the context of neural networks, we usually actually mean an operation that consists of many applications of convolution in parallel. This is because convolution with a single kernel can only extract one kind of feature, albeit at many spatial locations. Usually we want each layer of our network to extract many kinds of features, at many locations.
Additionally, the input is usually not just a grid of real values. Rather, it is a grid of vector-valued observations. For example, a color image has a red, green, and blue intensity at each pixel. In a multilayer convolutional network, the input to the second layer is the output of the first layer, which usually has the output of many different convolutions at each position. When working with images, we usually think of the input and output of the convolution as being 3-D tensors, with one index into the different channels and two indices into the spatial coordinates of each channel. Software implementations usually work in batch mode, so they will actually use 4-D tensors, with the fourth axis indexing different examples in the batch, but we will omit the batch axis in our description here for simplicity.

Because convolutional networks usually use multi-channel convolution, the linear operations they are based on are not guaranteed to be commutative, even if kernel flipping is used. These multi-channel operations are only commutative if each operation has the same number of output channels as input channels.
Assume we have a 4-D kernel tensor K with element K_{i,j,k,l} giving the connection strength between a unit in channel i of the output and a unit in channel j of the input, with an offset of k rows and l columns between the output unit and the input unit. Assume our input consists of observed data V with element V_{i,j,k} giving the value of the input unit within channel i at row j and column k. Assume our output consists of Z with the same format as V. If Z is produced by convolving K across V without flipping K, then

Z_{i,j,k} = \sum_{l,m,n} V_{l,\, j+m-1,\, k+n-1} \, K_{i,l,m,n}    (9.7)

where the summation over l, m, and n is over all values for which the tensor indexing operations inside the summation are valid.
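To make the indexing concrete, here is a minimal NumPy sketch of equation 9.7; it is not code from the book. It uses zero-based indexing, visits only "valid" positions where the kernel fits entirely inside the input, and the function and variable names are ours.

```python
import numpy as np

def multi_channel_conv(V, K):
    """Equation 9.7 written out with explicit loops (cross-correlation, i.e.
    no kernel flipping). V: (in_channels, rows, cols). K: (out_channels,
    in_channels, kernel_rows, kernel_cols). Returns Z of shape
    (out_channels, rows - kernel_rows + 1, cols - kernel_cols + 1)."""
    c_in, H, W = V.shape
    c_out, _, kH, kW = K.shape
    Z = np.zeros((c_out, H - kH + 1, W - kW + 1))
    for i in range(c_out):              # output channel
        for j in range(Z.shape[1]):     # output row
            for k in range(Z.shape[2]): # output column
                # sum over input channels l and kernel offsets m, n
                Z[i, j, k] = np.sum(V[:, j:j + kH, k:k + kW] * K[i])
    return Z
```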
In linear algebra notation, we index into arrays using 1 for the first entry. This necessitates the −1 terms in the above formula. Programming languages such as C and Python index starting from 0, rendering the above expression even simpler.

We may want to skip over some positions of the kernel in order to reduce the computational cost (at the expense of not extracting our features as finely). We can think of this as downsampling the output of the full convolution function. If we want to sample only every s pixels in each direction in the output, then we can define a downsampled convolution function c such that

Z_{i,j,k} = c(K, V, s)_{i,j,k} = \sum_{l,m,n} V_{l,\, (j-1)\times s+m,\, (k-1)\times s+n} \, K_{i,l,m,n}.    (9.8)

We refer to s as the stride of this downsampled convolution.
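A strided (downsampled) convolution as in equation 9.8 can be sketched by reusing the function above and keeping every s-th output. This is exactly the wasteful two-step view that figure 9.12 warns about; an efficient implementation would compute only the retained positions. The sketch and its names are ours, not the book's.

```python
def strided_conv(V, K, s):
    """Equation 9.8: downsampled convolution with stride s, expressed as full
    convolution followed by subsampling (clear but computationally wasteful)."""
    Z_full = multi_channel_conv(V, K)   # defined in the sketch above
    return Z_full[:, ::s, ::s]          # keep every s-th row and column
```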
It is also possible to define a separate stride for each direction of motion. See figure 9.12 for an illustration.

One essential feature of any convolutional network implementation is the ability to implicitly zero-pad the input V in order to make it wider. Without this feature, the width of the representation shrinks by one pixel less than the kernel width at each layer. Zero padding the input allows us to control the kernel width and the size of the output independently. Without zero padding, we are forced to choose between shrinking the spatial extent of the network rapidly and using small kernels, two scenarios that significantly limit the expressive power of the network. See figure 9.13 for an example.

Three special cases of the zero-padding setting are worth mentioning. One is the extreme case in which no zero padding is used whatsoever, and the convolution kernel is only allowed to visit positions where the entire kernel is contained entirely within the image.
In MATLAB terminology, this is called valid convolution. In this case, all pixels in the output are a function of the same number of pixels in the input, so the behavior of an output pixel is somewhat more regular. However, the size of the output shrinks at each layer. If the input image has width m and the kernel has width k, the output will be of width m − k + 1. The rate of this shrinkage can be dramatic if the kernels used are large. Since the shrinkage is greater than 0, it limits the number of convolutional layers that can be included in the network. As layers are added, the spatial dimension of the network will eventually drop to 1 × 1, at which point additional layers cannot meaningfully be considered convolutional. Another special case of the zero-padding setting is when just enough zero padding is added to keep the size of the output equal to the size of the input. MATLAB calls this same convolution.
In this case, the network can contain as many convolutional layers as the available hardware can support, since the operation of convolution does not modify the architectural possibilities available to the next layer. However, the input pixels near the border influence fewer output pixels than the input pixels near the center. This can make the border pixels somewhat underrepresented in the model. This motivates the other extreme case, which MATLAB refers to as full convolution, in which enough zeros are added for every pixel to be visited k times in each direction, resulting in an output image of width m + k − 1. In this case, the output pixels near the border are a function of fewer pixels than the output pixels near the center. This can make it difficult to learn a single kernel that performs well at all positions in the convolutional feature map. Usually the optimal amount of zero padding (in terms of test set classification accuracy) lies somewhere between "valid" and "same" convolution.
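The output widths for the three padding regimes can be summarized in a few lines. This helper is our own illustration (unit stride assumed), not part of any particular library.

```python
def output_width(m, k, padding):
    """Output width of a 1-D convolution for input width m and kernel width k."""
    if padding == "valid":   # kernel kept entirely inside the input
        return m - k + 1
    if padding == "same":    # enough zeros to preserve the input width
        return m
    if padding == "full":    # every input pixel visited k times
        return m + k - 1
    raise ValueError("padding must be 'valid', 'same', or 'full'")
```

For example, output_width(16, 6, "valid") returns 11, matching the five-pixel shrinkage per layer in figure 9.13.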
Figure 9.12: Convolution with a stride. In this example, we use a stride of two. (Top) Convolution with a stride length of two implemented in a single operation. (Bottom) Convolution with a stride greater than one pixel is mathematically equivalent to convolution with unit stride followed by downsampling. Obviously, the two-step approach involving downsampling is computationally wasteful, because it computes many values that are then discarded.
Figure 9.13: The effect of zero padding on network size. Consider a convolutional network with a kernel of width six at every layer. In this example, we do not use any pooling, so only the convolution operation itself shrinks the network size. (Top) In this convolutional network, we do not use any implicit zero padding. This causes the representation to shrink by five pixels at each layer. Starting from an input of sixteen pixels, we are only able to have three convolutional layers, and the last layer does not ever move the kernel, so arguably only two of the layers are truly convolutional. The rate of shrinking can be mitigated by using smaller kernels, but smaller kernels are less expressive, and some shrinking is inevitable in this kind of architecture. (Bottom) By adding five implicit zeros to each layer, we prevent the representation from shrinking with depth. This allows us to make an arbitrarily deep convolutional network.
In some cases, we do not actually want to use convolution, but rather locally connected layers (LeCun, 1986, 1989). In this case, the adjacency matrix in the graph of our MLP is the same, but every connection has its own weight, specified by a 6-D tensor W. The indices into W are, respectively: i, the output channel; j, the output row; k, the output column; l, the input channel; m, the row offset within the input; and n, the column offset within the input. The linear part of a locally connected layer is then given by

Z_{i,j,k} = \sum_{l,m,n} \left[ V_{l,\, j+m-1,\, k+n-1} \, w_{i,j,k,l,m,n} \right].    (9.9)

This is sometimes also called unshared convolution, because it is a similar operation to discrete convolution with a small kernel, but without sharing parameters across locations. Figure 9.14 compares local connections, convolution, and full connections.
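A locally connected layer can be sketched the same way as the convolution above, except that a separate kernel is stored for every output location (the 6-D tensor W of equation 9.9). Again this is a minimal NumPy illustration with our own naming.

```python
def locally_connected(V, W):
    """Equation 9.9 (unshared convolution). W has shape (out_channels,
    out_rows, out_cols, in_channels, kernel_rows, kernel_cols): one kernel
    per output location."""
    c_out, out_H, out_W, c_in, kH, kW = W.shape
    Z = np.zeros((c_out, out_H, out_W))
    for i in range(c_out):
        for j in range(out_H):
            for k in range(out_W):
                Z[i, j, k] = np.sum(V[:, j:j + kH, k:k + kW] * W[i, j, k])
    return Z
```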
Locally connected layers are useful when we know that each feature should be a function of a small part of space, but there is no reason to think that the same feature should occur across all of space. For example, if we want to tell whether an image is a picture of a face, we only need to look for the mouth in the bottom half of the image.

It can also be useful to make versions of convolution or locally connected layers in which the connectivity is further restricted, for example to constrain each output channel i to be a function of only a subset of the input channels l. A common way to do this is to make the first m output channels connect to only the first n input channels, the second m output channels connect to only the second n input channels, and so on. See figure 9.15 for an example. Modeling interactions between few channels allows the network to have fewer parameters, which reduces memory consumption and increases statistical efficiency, and also reduces the amount of computation needed to perform forward and back-propagation.
It accomplishes these goals without reducing the number of hidden units.

Tiled convolution (Gregor and LeCun, 2010a; Le et al., 2010) offers a compromise between a convolutional layer and a locally connected layer. Rather than learning a separate set of weights at every spatial location, we learn a set of kernels that we rotate through as we move through space. This means that immediately neighboring locations will have different filters, like in a locally connected layer, but the memory requirements for storing the parameters will increase only by a factor of the size of this set of kernels, rather than by the size of the entire output feature map. See figure 9.16 for a comparison of locally connected layers, tiled convolution, and standard convolution.
Figure 9.14: Comparison of local connections, convolution, and full connections. (Top) A locally connected layer with a patch size of two pixels. Each edge is labeled with a unique letter to show that each edge is associated with its own weight parameter.
(Center) A convolutional layer with a kernel width of two pixels. This model has exactly the same connectivity as the locally connected layer. The difference lies not in which units interact with each other, but in how the parameters are shared. The locally connected layer has no parameter sharing. The convolutional layer uses the same two weights repeatedly across the entire input, as indicated by the repetition of the letters labeling each edge. (Bottom) A fully connected layer resembles a locally connected layer in the sense that each edge has its own parameter (there are too many to label explicitly with letters in this diagram). However, it does not have the restricted connectivity of the locally connected layer.
Figure 9.15: A convolutional network with the first two output channels connected to only the first two input channels, and the second two output channels connected to only the second two input channels.
Figure 9.16: A comparison of locally connected layers, tiled convolution, and standard convolution. All three have the same sets of connections between units, when the same size of kernel is used. This diagram illustrates the use of a kernel that is two pixels wide.
The difference between the methods lies in how they share parameters. (Top) A locally connected layer has no sharing at all. We indicate that each connection has its own weight by labeling each connection with a unique letter. (Center) Tiled convolution has a set of t different kernels. Here we illustrate the case of t = 2. One of these kernels has edges labeled "a" and "b," while the other has edges labeled "c" and "d." Each time we move one pixel to the right in the output, we move on to using a different kernel. This means that, like the locally connected layer, neighboring units in the output have different parameters. Unlike the locally connected layer, after we have gone through all t available kernels, we cycle back to the first kernel. If two output units are separated by a multiple of t steps, then they share parameters. (Bottom) Traditional convolution is equivalent to tiled convolution with t = 1. There is only one kernel, and it is applied everywhere, as indicated in the diagram by using the kernel with weights labeled "a" and "b" everywhere.
To define tiled convolution algebraically, let K be a 6-D tensor, where two of the dimensions correspond to different locations in the output map. Rather than having a separate index for each location in the output map, output locations cycle through a set of t different choices of kernel stack in each direction. If t is equal to the output width, this is the same as a locally connected layer:

Z_{i,j,k} = \sum_{l,m,n} V_{l,\, j+m-1,\, k+n-1} \, K_{i,l,m,n,\, j\%t+1,\, k\%t+1},    (9.10)

where % is the modulo operation, with t % t = 0, (t + 1) % t = 1, and so on. It is straightforward to generalize this equation to use a different tiling range for each dimension.
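The cycling over a small set of kernels can be sketched as below; the kernel stack used at output position (j, k) is selected by j % t and k % t. The shapes and names are our own choices, not the book's.

```python
def tiled_conv(V, K, t):
    """Equation 9.10. K has shape (out_channels, in_channels, kernel_rows,
    kernel_cols, t, t); the last two axes index the tiling pattern."""
    c_in, H, W = V.shape
    c_out, _, kH, kW, _, _ = K.shape
    Z = np.zeros((c_out, H - kH + 1, W - kW + 1))
    for i in range(c_out):
        for j in range(Z.shape[1]):
            for k in range(Z.shape[2]):
                kernel = K[i, :, :, :, j % t, k % t]   # cycle through the t kernels
                Z[i, j, k] = np.sum(V[:, j:j + kH, k:k + kW] * kernel)
    return Z
```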
Both locally connected layers and tiled convolutional layers have an interesting interaction with max pooling: the detector units of these layers are driven by different filters. If these filters learn to detect differently transformed versions of the same underlying features, then the max-pooled units become invariant to the learned transformation (see figure 9.9). Convolutional layers are hard-coded to be invariant specifically to translation.

Other operations besides convolution are usually necessary to implement a convolutional network. To perform learning, one must be able to compute the gradient with respect to the kernel, given the gradient with respect to the outputs. In some simple cases, this operation can be performed using the convolution operation, but many cases of interest, including the case of stride greater than 1, do not have this property.

Recall that convolution is a linear operation and can thus be described as a matrix multiplication (if we first reshape the input tensor into a flat vector).
The matrix involved is a function of the convolution kernel. The matrix is sparse, and each element of the kernel is copied to several elements of the matrix. This view helps us to derive some of the other operations needed to implement a convolutional network.

Multiplication by the transpose of the matrix defined by convolution is one such operation. This is the operation needed to back-propagate error derivatives through a convolutional layer, so it is needed to train convolutional networks that have more than one hidden layer. This same operation is also needed if we wish to reconstruct the visible units from the hidden units (Simard et al., 1992). Reconstructing the visible units is an operation commonly used in the models described in part III of this book, such as autoencoders, RBMs, and sparse coding. Transpose convolution is necessary to construct convolutional versions of those models.
Like the kernel gradient operation, this input gradient operation can be implemented using a convolution in some cases, but in the general case it requires a third operation to be implemented. Care must be taken to coordinate this transpose operation with the forward propagation. The size of the output that the transpose operation should return depends on the zero padding policy and stride of the forward propagation operation, as well as on the size of the forward propagation's output map. In some cases, multiple sizes of input to forward propagation can result in the same size of output map, so the transpose operation must be explicitly told what the size of the original input was.

These three operations (convolution, backprop from output to weights, and backprop from output to inputs) are sufficient to compute all of the gradients needed to train any depth of feedforward convolutional network, as well as to train convolutional networks with reconstruction functions based on the transpose of convolution. See Goodfellow (2010) for a full derivation of the equations in the fully general multi-dimensional, multi-example case. To give a sense of how these equations work, we present the two-dimensional, single-example version here.
Suppose we want to train a convolutional network that incorporates strided convolution of kernel stack K applied to multi-channel image V with stride s, as defined by c(K, V, s) in equation 9.8. Suppose we want to minimize some loss function J(V, K). During forward propagation, we will need to use c itself to output Z, which is then propagated through the rest of the network and used to compute the cost function J. During back-propagation, we will receive a tensor G such that

G_{i,j,k} = \frac{\partial}{\partial Z_{i,j,k}} J(V, K).

To train the network, we need to compute the derivatives with respect to the weights in the kernel.
To do so, we can use a function

g(G, V, s)_{i,j,k,l} = \frac{\partial}{\partial K_{i,j,k,l}} J(V, K) = \sum_{m,n} G_{i,m,n} \, V_{j,\, (m-1)\times s+k,\, (n-1)\times s+l}.    (9.11)

If this layer is not the bottom layer of the network, we will need to compute the gradient with respect to V in order to back-propagate the error farther down. To do so, we can use a function

h(K, G, s)_{i,j,k} = \frac{\partial}{\partial V_{i,j,k}} J(V, K)    (9.12)
= \sum_{\substack{l,m \ \text{s.t.}\ (l-1)\times s+m = j}} \; \sum_{\substack{n,p \ \text{s.t.}\ (n-1)\times s+p = k}} \; \sum_{q} K_{q,i,m,p} \, G_{q,l,n}.    (9.13)
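For intuition, the two gradient operations g and h of equations 9.11 through 9.13 can be computed together by looping over the output positions of the strided convolution and scattering G back through the kernel. This combined sketch uses zero-based indexing and our own names; it is an illustration, not the derivation in Goodfellow (2010).

```python
def conv_grads(V, K, G, s):
    """Given input V, kernel stack K, stride s, and G = dJ/dZ, return
    (dJ/dK, dJ/dV), i.e. the functions g and h of equations 9.11-9.13."""
    c_out, c_in, kH, kW = K.shape
    _, out_H, out_W = G.shape
    dK = np.zeros_like(K)
    dV = np.zeros_like(V)
    for i in range(c_out):
        for m in range(out_H):
            for n in range(out_W):
                patch = V[:, m * s:m * s + kH, n * s:n * s + kW]
                dK[i] += G[i, m, n] * patch                                      # eq. 9.11
                dV[:, m * s:m * s + kH, n * s:n * s + kW] += G[i, m, n] * K[i]   # eq. 9.13
    return dK, dV
```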
Autoencoder networks, described in chapter 14, are feedforward networks trained to copy their input to their output. A simple example is the PCA algorithm, which copies its input x to an approximate reconstruction r using the function W^T W x. It is common for more general autoencoders to use multiplication by the transpose of the weight matrix just as PCA does. To make such models convolutional, we can use the function h to perform the transpose of the convolution operation. Suppose we have hidden units H in the same format as Z, and we define a reconstruction

R = h(K, H, s).    (9.14)

In order to train the autoencoder, we will receive the gradient with respect to R as a tensor E. To train the decoder, we need to obtain the gradient with respect to K. This is given by g(H, E, s). To train the encoder, we need to obtain the gradient with respect to H. This is given by c(K, E, s). It is also possible to differentiate through g using c and h, but these operations are not needed for the back-propagation algorithm on any standard network architectures.
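The decoder of such a convolutional autoencoder is just the transpose operation h applied to the hidden code; the scatter pattern is the same one used for the input gradient above, isolated here as a standalone sketch. The explicit out_shape argument reflects the point made earlier that the transpose must be told the original input size.

```python
def transpose_conv(K, H, s, out_shape):
    """R = h(K, H, s) (equation 9.14): scatter each hidden value back through
    the kernel stack to reconstruct the visible units."""
    c_out, c_in, kH, kW = K.shape
    _, hid_H, hid_W = H.shape
    R = np.zeros(out_shape)   # shape of the visible units, e.g. V.shape
    for i in range(c_out):
        for m in range(hid_H):
            for n in range(hid_W):
                R[:, m * s:m * s + kH, n * s:n * s + kW] += H[i, m, n] * K[i]
    return R
```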
Generally, we do not use only a linear operation to transform from the inputs to the outputs in a convolutional layer. We generally also add some bias term to each output before applying the nonlinearity. This raises the question of how to share parameters among the biases. For locally connected layers, it is natural to give each unit its own bias, and for tiled convolution, it is natural to share the biases with the same tiling pattern as the kernels. For convolutional layers, it is typical to have one bias per channel of the output and share it across all locations within each convolution map. However, if the input is of known, fixed size, it is also possible to learn a separate bias at each location of the output map. Separating the biases may slightly reduce the statistical efficiency of the model, but it also allows the model to correct for differences in the image statistics at different locations. For example, when using implicit zero padding, detector units at the edge of the image receive less total input and may need larger biases.
9.6 Structured Outputs

Convolutional networks can be used to output a high-dimensional, structured object, rather than just predicting a class label for a classification task or a real value for a regression task. Typically this object is just a tensor, emitted by a standard convolutional layer. For example, the model might emit a tensor S, where S_{i,j,k} is the probability that pixel (j, k) of the input to the network belongs to class i. This allows the model to label every pixel in an image and draw precise masks that follow the outlines of individual objects.
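If the last convolutional layer emits one score per class per pixel, the tensor S can be obtained with a softmax over the class axis. A small sketch (our own helper, not from the book):

```python
def pixelwise_class_probabilities(scores):
    """scores: (num_classes, rows, cols) output of the last convolutional
    layer. Returns S with S[i, j, k] = probability that pixel (j, k)
    belongs to class i."""
    e = np.exp(scores - scores.max(axis=0, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=0, keepdims=True)
```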
Figure 9.17: An example of a recurrent convolutional network for pixel labeling. The input is an image tensor X, with axes corresponding to image rows, image columns, and channels (red, green, blue). The goal is to output a tensor of labels Ŷ, with a probability distribution over labels for each pixel. This tensor has axes corresponding to image rows, image columns, and the different classes. Rather than outputting Ŷ in a single shot, the recurrent network iteratively refines its estimate Ŷ by using a previous estimate of Ŷ as input for creating a new estimate. The same parameters are used for each updated estimate, and the estimate can be refined as many times as we wish.
The tensor of convolution kernels U is used on each step to compute the hidden representation given the input image. The kernel tensor V is used to produce an estimate of the labels given the hidden values. On all but the first step, the kernels W are convolved over Ŷ to provide input to the hidden layer. On the first time step, this term is replaced by zero. Because the same parameters are used on each step, this is an example of a recurrent network, as described in chapter 10.

One issue that often comes up is that the output plane can be smaller than the input plane, as shown in figure 9.13. In the kinds of architectures typically used for classification of a single object in an image, the greatest reduction in the spatial dimensions of the network comes from using pooling layers with large stride. In order to produce an output map of similar size as the input, one can avoid pooling altogether (Jain et al., 2007). Another strategy is to simply emit a lower-resolution grid of labels (Pinheiro and Collobert, 2014, 2015). Finally, in principle, one could use a pooling operator with unit stride.
One strategy for pixel-wise labeling of images is to produce an initial guess of the image labels, then refine this initial guess using the interactions between neighboring pixels. Repeating this refinement step several times corresponds to using the same convolutions at each stage, sharing weights between the last layers of the deep net (Jain et al., 2007). This makes the sequence of computations performed by the successive convolutional layers, with weights shared across layers, a particular kind of recurrent network (Pinheiro and Collobert, 2014, 2015). Figure 9.17 shows the architecture of such a recurrent convolutional network.
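The iteration in figure 9.17 can be sketched as follows, reusing the multi_channel_conv function from earlier with implicit zero padding so every layer preserves the spatial size. The ReLU detector stage and the softmax output are our assumptions added for concreteness; the figure itself only specifies which kernel tensors (U, V, W) are applied at each step.

```python
def same_conv(V, K):
    """'Same' convolution built from multi_channel_conv (defined earlier) by
    zero-padding the input; odd kernel widths assumed."""
    _, _, kH, kW = K.shape
    pad = ((0, 0), (kH // 2, kH // 2), (kW // 2, kW // 2))
    return multi_channel_conv(np.pad(V, pad), K)

def recurrent_pixel_labeler(X, U, V, W, n_steps):
    """Figure 9.17 sketch: the same kernels U, V, W are reused at every
    refinement step. X: (channels, rows, cols). Returns per-pixel class
    probabilities of shape (num_classes, rows, cols)."""
    Y_hat = np.zeros((V.shape[0],) + X.shape[1:])   # zero estimate: first step's W-term vanishes
    for _ in range(n_steps):
        hidden = np.maximum(same_conv(X, U) + same_conv(Y_hat, W), 0.0)  # assumed ReLU
        scores = same_conv(hidden, V)
        e = np.exp(scores - scores.max(axis=0, keepdims=True))
        Y_hat = e / e.sum(axis=0, keepdims=True)    # refined label estimate (assumed softmax)
    return Y_hat
```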
Once a prediction for each pixel is made, various methods can be used to further process these predictions in order to obtain a segmentation of the image into regions (Briggman et al., 2009; Turaga et al., 2010; Farabet et al., 2013). The general idea is to assume that large groups of contiguous pixels tend to be associated with the same label. Graphical models can describe the probabilistic relationships between neighboring pixels. Alternatively, the convolutional network can be trained to maximize an approximation of the graphical model training objective (Ning et al., 2005; Thompson et al., 2014).

9.7 Data Types

The data used with a convolutional network usually consists of several channels, each channel being the observation of a different quantity at some point in space or time. See table 9.1 for examples of data types with different dimensionalities and numbers of channels. For an example of convolutional networks applied to video, see Chen et al. (2010).
So far we have discussed only the case where every example in the training and test data has the same spatial dimensions. One advantage of convolutional networks is that they can also process inputs with varying spatial extents. These kinds of input simply cannot be represented by traditional, matrix multiplication-based neural networks. This provides a compelling reason to use convolutional networks even when computational cost and overfitting are not significant issues. For example, consider a collection of images where each image has a different width and height. It is unclear how to model such inputs with a weight matrix of fixed size. Convolution is straightforward to apply; the kernel is simply applied a different number of times depending on the size of the input, and the output of the convolution operation scales accordingly. Convolution may be viewed as matrix multiplication; the same convolution kernel induces a different size of doubly block circulant matrix for each size of input.
Sometimes the output of the network is allowed to have variable size as well as the input, for example if we want to assign a class label to each pixel of the input. In this case, no further design work is necessary. In other cases, the network must produce some fixed-size output, for example if we want to assign a single class label to the entire image. In this case we must make some additional design steps, like inserting a pooling layer whose pooling regions scale in size proportionally to the size of the input, in order to maintain a fixed number of pooled outputs. Some examples of this kind of strategy are shown in figure 9.11.
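One way to realize a pooling layer whose regions scale with the input, so that a fixed number of pooled outputs is produced for any input size, is sketched below. The even partitioning into a fixed grid is our own choice of detail, and it assumes the feature map is at least as large as the pooling grid.

```python
def scaled_region_max_pool(features, out_rows, out_cols):
    """Max-pool a (channels, rows, cols) feature map into a fixed
    (channels, out_rows, out_cols) grid, regardless of rows and cols."""
    C, H, W = features.shape
    row_edges = np.linspace(0, H, out_rows + 1).astype(int)
    col_edges = np.linspace(0, W, out_cols + 1).astype(int)
    pooled = np.zeros((C, out_rows, out_cols))
    for r in range(out_rows):
        for c in range(out_cols):
            region = features[:, row_edges[r]:row_edges[r + 1],
                                 col_edges[c]:col_edges[c + 1]]
            pooled[:, r, c] = region.max(axis=(1, 2))   # per-channel max over the region
    return pooled
```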
Table 9.1: Examples of different formats of data that can be used with convolutional networks.

1-D, single channel: Audio waveform. The axis we convolve over corresponds to time. We discretize time and measure the amplitude of the waveform once per time step.

1-D, multi-channel: Skeleton animation data. Animations of 3-D computer-rendered characters are generated by altering the pose of a "skeleton" over time. At each point in time, the pose of the character is described by a specification of the angles of each of the joints in the character's skeleton. Each channel in the data we feed to the convolutional model represents the angle about one axis of one joint.

2-D, single channel: Audio data that has been preprocessed with a Fourier transform. We can transform the audio waveform into a 2-D tensor with different rows corresponding to different frequencies and different columns corresponding to different points in time. Using convolution in the time axis makes the model equivariant to shifts in time. Using convolution across the frequency axis makes the model equivariant to frequency, so that the same melody played in a different octave produces the same representation, but at a different height in the network's output.

2-D, multi-channel: Color image data. One channel contains the red pixels, one the green pixels, and one the blue pixels. The convolution kernel moves over both the horizontal and vertical axes of the image, conferring translation equivariance in both directions.

3-D, single channel: Volumetric data. A common source of this kind of data is medical imaging technology, such as CT scans.

3-D, multi-channel: Color video data. One axis corresponds to time, one to the height of the video frame, and one to the width of the video frame.
Note that the use of convolution for processing variable-sized inputs makes sense only for inputs that have variable size because they contain varying amounts of observation of the same kind of thing: different lengths of recordings over time, different widths of observations over space, and so on. Convolution does not make sense if the input has variable size because it can optionally include different kinds of observations. For example, if we are processing college applications, and our features consist of both grades and standardized test scores, but not every applicant took the standardized test, then it does not make sense to convolve the same weights over both the features corresponding to the grades and the features corresponding to the test scores.

9.8 Efficient Convolution Algorithms

Modern convolutional network applications often involve networks containing more than one million units. Powerful implementations exploiting parallel computation resources, as discussed in section 12.1, are essential. However, in many cases it is also possible to speed up convolution by selecting an appropriate convolution algorithm.
Convolution is equivalent to converting both the input and the kernel to the frequency domain using a Fourier transform, performing point-wise multiplication of the two signals, and converting back to the time domain using an inverse Fourier transform. For some problem sizes, this can be faster than the naive implementation of discrete convolution.

When a d-dimensional kernel can be expressed as the outer product of d vectors, one vector per dimension, the kernel is called separable. When the kernel is separable, naive convolution is inefficient. It is equivalent to composing d one-dimensional convolutions with each of these vectors. The composed approach is significantly faster than performing one d-dimensional convolution with their outer product. The kernel also takes fewer parameters to represent as vectors. If the kernel is w elements wide in each dimension, then naive multidimensional convolution requires O(w^d) runtime and parameter storage space, while separable convolution requires O(w × d) runtime and parameter storage space. Of course, not every convolution can be represented in this way.
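For a separable 2-D kernel, the composition of two 1-D convolutions can be written directly; this toy example is our own and uses NumPy's 1-D convolve (which flips the kernel, so the composition matches true convolution with the outer-product kernel).

```python
def separable_conv2d(image, k_rows, k_cols):
    """'Valid' 2-D convolution with the separable kernel np.outer(k_rows,
    k_cols), performed as two 1-D convolutions: O(w*d) work per output
    instead of O(w**d) for the naive multidimensional loop."""
    # convolve each row with the column-axis factor ...
    tmp = np.apply_along_axis(
        lambda row: np.convolve(row, k_cols, mode="valid"), axis=1, arr=image)
    # ... then each column of the result with the row-axis factor
    return np.apply_along_axis(
        lambda col: np.convolve(col, k_rows, mode="valid"), axis=0, arr=tmp)
```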
Devising faster ways of performing convolution or approximate convolution without harming the accuracy of the model is an active area of research. Even techniques that improve the efficiency of only forward propagation are useful, because in the commercial setting it is typical to devote more resources to deployment of a network than to its training.
9.9 Random or Unsupervised Features

Typically, the most expensive part of convolutional network training is learning the features. The output layer is usually relatively inexpensive, due to the small number of features provided as input to this layer after passing through several layers of pooling. When performing supervised training with gradient descent, every gradient step requires a complete run of forward propagation and backward propagation through the entire network. One way to reduce the cost of convolutional network training is to use features that are not trained in a supervised fashion.

There are three basic strategies for obtaining convolution kernels without supervised training. One is to simply initialize them randomly. Another is to design them by hand, for example by setting each kernel to detect edges at a certain orientation or scale. Finally, one can learn the kernels with an unsupervised criterion. For example, Coates et al. (2011) apply k-means clustering to small image patches, then use each learned centroid as a convolution kernel. Part III describes many more unsupervised learning approaches.
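A bare-bones version of that k-means recipe is sketched below; the patch sampling, the plain Lloyd iterations, and the omission of preprocessing such as whitening are our simplifications, not the exact procedure of Coates et al. (2011).

```python
def kmeans_kernels(images, patch_size, n_kernels, n_iters=10, seed=0):
    """images: (num_images, channels, rows, cols). Returns centroids reshaped
    into convolution kernels of shape (n_kernels, channels, patch, patch)."""
    rng = np.random.default_rng(seed)
    C, H, W = images.shape[1:]
    patches = []
    for _ in range(20 * n_kernels):                       # sample random patches
        img = images[rng.integers(len(images))]
        r = rng.integers(H - patch_size + 1)
        c = rng.integers(W - patch_size + 1)
        patches.append(img[:, r:r + patch_size, c:c + patch_size].ravel())
    X = np.array(patches, dtype=float)
    centroids = X[rng.choice(len(X), n_kernels, replace=False)].copy()
    for _ in range(n_iters):                              # plain Lloyd iterations
        dist = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = dist.argmin(axis=1)
        for k in range(n_kernels):
            if np.any(assign == k):
                centroids[k] = X[assign == k].mean(axis=0)
    return centroids.reshape(n_kernels, C, patch_size, patch_size)
```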
Learning the features with an unsupervised criterion allows them to be determined separately from the classifier layer at the top of the architecture. One can then extract the features for the entire training set just once, essentially constructing a new training set for the last layer. Learning the last layer is then typically a convex optimization problem, assuming the last layer is something like logistic regression or an SVM.

Random filters often work surprisingly well in convolutional networks (Jarrett et al., 2009; Saxe et al., 2011; Pinto et al., 2011; Cox and Pinto, 2011). Saxe et al. (2011) showed that layers consisting of convolution followed by pooling naturally become frequency selective and translation invariant when assigned random weights. They argue that this provides an inexpensive way to choose the architecture of a convolutional network: first evaluate the performance of several convolutional network architectures by training only the last layer, then take the best of these architectures and train the entire architecture using a more expensive approach.
An intermediate approach is to learn the features, but using methods that do not require full forward and back-propagation at every gradient step. As with multilayer perceptrons, we use greedy layer-wise pretraining: train the first layer in isolation, then extract all features from the first layer only once, then train the second layer in isolation given those features, and so on. Chapter 8 has described how to perform supervised greedy layer-wise pretraining, and part III extends this to greedy layer-wise pretraining using an unsupervised criterion at each layer. The canonical example of greedy layer-wise pretraining of a convolutional model is the convolutional deep belief network (Lee et al., 2009).
Convolutional networks offer us the opportunity to take the pretraining strategy one step further than is possible with multilayer perceptrons. Instead of training an entire convolutional layer at a time, we can train a model of a small patch, as Coates et al. (2011) do with k-means. We can then use the parameters from this patch-based model to define the kernels of a convolutional layer. This means that it is possible to use unsupervised learning to train a convolutional network without ever using convolution during the training process. Using this approach, we can train very large models and incur a high computational cost only at inference time (Ranzato et al., 2007b; Jarrett et al., 2009; Kavukcuoglu et al., 2010; Coates et al., 2013). This approach was popular from roughly 2007 to 2013, when labeled datasets were small and computational power was more limited. Today, most convolutional networks are trained in a purely supervised fashion, using full forward and back-propagation through the entire network on each training iteration.
As with other approaches to unsupervised pretraining, it remains difficult to tease apart the cause of some of the benefits seen with this approach. Unsupervised pretraining may offer some regularization relative to supervised training, or it may simply allow us to train much larger architectures due to the reduced computational cost of the learning rule.

9.10 The Neuroscientific Basis for Convolutional Networks

Convolutional networks are perhaps the greatest success story of biologically inspired artificial intelligence. Though convolutional networks have been guided by many other fields, some of the key design principles of neural networks were drawn from neuroscience.
The history of convolutional networks begins with neuroscientific experiments long before the relevant computational models were developed. Neurophysiologists David Hubel and Torsten Wiesel collaborated for several years to determine many of the most basic facts about how the mammalian vision system works (Hubel and Wiesel, 1959, 1962, 1968). Their accomplishments were eventually recognized with a Nobel Prize. Their findings that have had the greatest influence on contemporary deep learning models were based on recording the activity of individual neurons in cats. They observed how neurons in the cat's brain responded to images projected in precise locations on a screen in front of the cat. Their great discovery was that neurons in the early visual system responded most strongly to very specific patterns of light, such as precisely oriented bars, but responded hardly at all to other patterns.
Their work helped to characterize many aspects of brain function that are beyond the scope of this book. From the point of view of deep learning, we can focus on a simplified, cartoon view of brain function.

In this simplified view, we focus on a part of the brain called V1, also known as the primary visual cortex. V1 is the first area of the brain that begins to perform significantly advanced processing of visual input. In this cartoon view, images are formed by light arriving in the eye and stimulating the retina, the light-sensitive tissue in the back of the eye. The neurons in the retina perform some simple preprocessing of the image but do not substantially alter the way it is represented. The image then passes through the optic nerve and a brain region called the lateral geniculate nucleus. The main role, as far as we are concerned here, of both of these anatomical regions is primarily just to carry the signal from the eye to V1, which is located at the back of the head.
A convolutional network layer is designed to capture three properties of V1:

1. V1 is arranged in a spatial map. It actually has a two-dimensional structure mirroring the structure of the image in the retina. For example, light arriving at the lower half of the retina affects only the corresponding half of V1. Convolutional networks capture this property by having their features defined in terms of two-dimensional maps.

2. V1 contains many simple cells. A simple cell's activity can to some extent be characterized by a linear function of the image in a small, spatially localized receptive field. The detector units of a convolutional network are designed to emulate these properties of simple cells.

3. V1 also contains many complex cells. These cells respond to features that are similar to those detected by simple cells, but complex cells are invariant to small shifts in the position of the feature. This inspires the pooling units of convolutional networks. Complex cells are also invariant to some changes in lighting that cannot be captured simply by pooling over spatial locations.
These invariances have inspired some of the cross-channel pooling strategies in convolutional networks, such as maxout units (Goodfellow et al., 2013a).

Though we know the most about V1, it is generally believed that the same basic principles apply to other areas of the visual system. In our cartoon view of the visual system, the basic strategy of detection followed by pooling is repeatedly applied as we move deeper into the brain. As we pass through multiple anatomical layers of the brain, we eventually find cells that respond to some specific concept and are invariant to many transformations of the input.
These cells have been nicknamed "grandmother cells": the idea is that a person could have a neuron that activates when seeing an image of their grandmother, regardless of whether she appears in the left or right side of the image, whether the image is a close-up of her face or a zoomed-out shot of her entire body, whether she is brightly lit or in shadow, and so on. These grandmother cells have been shown to actually exist in the human brain, in a region called the medial temporal lobe (Quiroga et al., 2005). Researchers tested whether individual neurons would respond to photos of famous individuals. They found what has come to be called the "Halle Berry neuron": an individual neuron that is activated by the concept of Halle Berry. This neuron fires when a person sees a photo of Halle Berry, a drawing of Halle Berry, or even text containing the words "Halle Berry."
Of course, this has nothing to do with Halle Berry herself; other neurons responded to the presence of Bill Clinton, Jennifer Aniston, and so on. These medial temporal lobe neurons are somewhat more general than modern convolutional networks, which would not automatically generalize to identifying a person or object when reading its name. The closest analog to a convolutional network's last layer of features is a brain area called the inferotemporal cortex (IT). When viewing an object, information flows from the retina, through the LGN, to V1, then onward to V2, then V4, then IT. This happens within the first 100 ms of glimpsing an object. If a person is allowed to continue looking at the object for more time, then information will begin to flow backwards as the brain uses top-down feedback to update the activations in the lower-level brain areas. However, if we interrupt the person's gaze and observe only the firing rates that result from the first 100 ms of mostly feedforward activation, then IT proves to be very similar to a convolutional network.
of mostly feedforward activation, then it proves to be very similar to a convolutional network. convolutional networks can predict it firing rates, and also perform very similarly to (time-limited) humans on object recognition tasks (dicarlo, 2013).

that being said, there are many differences between convolutional networks and the mammalian vision system. some of these differences are well known to computational neuroscientists, but outside the scope of this book. some of these differences are not yet known, because many basic questions about how the mammalian vision system works remain unanswered. as a brief list:

• the human eye is mostly very low resolution, except for a tiny patch called the fovea. the fovea only observes an area about the size of a thumbnail held at arm's length. though we feel as if we can see an entire scene in high resolution, this is an illusion created by the subconscious part of our brain, as it stitches together several glimpses of small areas. most convolutional networks actually receive large full-resolution photographs as input. the human brain makes
several eye movements called saccades to glimpse the most visually salient or task-relevant parts of a scene. incorporating similar attention mechanisms into deep learning models is an active research direction. in the context of deep learning, attention mechanisms have been most successful for natural language processing, as described in section 12.4.5.1. several visual models with foveation mechanisms have been developed but so far have not become the dominant approach (larochelle and hinton, 2010; denil et al., 2012).

• the human visual system is integrated with many other senses, such as hearing, and factors like our moods and thoughts. convolutional networks so far are purely visual.

• the human visual system does much more than just recognize objects. it is able to understand entire scenes, including many objects and relationships between objects, and processes rich 3-d geometric information needed for our bodies to interface with the world. convolutional networks have been applied to some of these problems, but these applications are in their infancy.
• even simple brain areas like v1 are heavily impacted by feedback from higher levels. feedback has been explored extensively in neural network models but has not yet been shown to offer a compelling improvement.

• while feedforward it firing rates capture much of the same information as convolutional network features, it is not clear how similar the intermediate computations are. the brain probably uses very different activation and pooling functions. an individual neuron's activation probably is not well characterized by a single linear filter response. a recent model of v1 involves multiple quadratic filters for each neuron (rust et al., 2005); a small numerical sketch of such a quadratic filter appears after this list. indeed, our cartoon picture of "simple cells" and "complex cells" might create a non-existent distinction; simple cells and complex cells might both be the same kind of cell, but with their "parameters" enabling a continuum of behaviors ranging from what we call "simple" to what we call "complex."
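as one concrete example of a response that is quadratic rather than linear in the image, the sketch below implements the classic "energy model" of a complex cell: it squares and sums the responses of a quadrature pair of gabor filters, so the output barely changes when the underlying grating shifts phase, while a single linear filter's response changes a lot. this is a standard textbook construction chosen for illustration, not the specific multi-filter model of rust et al. (2005), and the filter parameters are arbitrary assumptions.

```python
import numpy as np

def gabor_pair(size=9, wavelength=4.0, sigma=2.0):
    """return a quadrature pair of gabor filters (even and odd phase)."""
    coords = np.arange(size) - size // 2
    x, y = np.meshgrid(coords, coords)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    even = envelope * np.cos(2 * np.pi * x / wavelength)
    odd = envelope * np.sin(2 * np.pi * x / wavelength)
    return even, odd

def complex_cell_response(patch, even, odd):
    """energy-model response: quadratic in the input patch, so it is nearly
    invariant to the phase (small shift) of a grating, unlike a linear filter."""
    return np.sum(even * patch) ** 2 + np.sum(odd * patch) ** 2

even, odd = gabor_pair()
coords = np.arange(9) - 4
x, _ = np.meshgrid(coords, coords)
patch_a = np.cos(2 * np.pi * x / 4.0)              # vertical grating
patch_b = np.cos(2 * np.pi * x / 4.0 + np.pi / 2)  # same grating, shifted phase

# the linear (simple-cell-like) response collapses when the phase shifts;
# the quadratic (complex-cell-like) energy response barely changes.
print(np.sum(even * patch_a), np.sum(even * patch_b))
print(complex_cell_response(patch_a, even, odd),
      complex_cell_response(patch_b, even, odd))
```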
it is also worth mentioning that neuroscience has told us relatively little about how to train convolutional networks. model structures with parameter sharing across multiple spatial locations date back to early connectionist models of vision (marr and poggio, 1976), but these models did not use the modern back-propagation algorithm and gradient descent. for example, the neocognitron (fukushima, 1980) incorporated most of the model architecture design elements of the modern convolutional network but relied on a layer-wise unsupervised clustering algorithm.

lang and hinton (1988) introduced the use of back-propagation to train time-delay neural networks (tdnns). to use contemporary terminology, tdnns are one-dimensional convolutional networks applied to time series. back-propagation applied to these models was not inspired by any neuroscientific observation and is considered by some to be biologically implausible. following the success of back-propagation-based training of tdnns, lecun et al. (1989) developed the modern convolutional network by applying the same training algorithm to 2-d convolution applied to images.
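to make the remark that tdnns are one-dimensional convolutional networks concrete, here is a minimal numpy sketch of a shared filter sliding over a time series, next to the analogous 2-d operation over an image. as is common in deep learning, the code computes cross-correlation (no kernel flip), and the filters and input sizes are illustrative assumptions.

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """slide one shared weight vector over a time series ('valid' positions only),
    the core operation of a tdnn-style layer."""
    k = len(kernel)
    return np.array([np.dot(signal[t:t + k], kernel)
                     for t in range(len(signal) - k + 1)])

def conv2d_valid(image, kernel):
    """the same idea in two dimensions: one shared weight matrix slides over an image."""
    kh, kw = kernel.shape
    h, w = image.shape
    return np.array([[np.sum(image[i:i + kh, j:j + kw] * kernel)
                      for j in range(w - kw + 1)]
                     for i in range(h - kh + 1)])

signal = np.sin(np.linspace(0, 4 * np.pi, 32))   # toy time series
edge1d = np.array([-1.0, 0.0, 1.0])              # crude temporal-change detector
print(conv1d_valid(signal, edge1d).shape)        # (30,)

image = np.random.randn(8, 8)
edge2d = np.array([[-1.0, 1.0], [-1.0, 1.0]])    # crude vertical-edge detector
print(conv2d_valid(image, edge2d).shape)         # (7, 7)
```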
so far we have described how simple cells are roughly linear and selective for certain features, complex cells are more nonlinear and become invariant to some transformations of these simple-cell features, and stacks of layers that alternate between selectivity and invariance can yield grandmother cells for very specific phenomena. we have not yet described precisely what these individual cells detect. in a deep, nonlinear network, it can be difficult to understand the function of individual cells. simple cells in the first layer are easier to analyze, because their responses are driven