We can write a kernel function k(x, x^(i)) = min(x, x^(i)) that is exactly equivalent to the corresponding infinite-dimensional dot product. The most commonly used kernel is the Gaussian kernel

k(u, v) = N(u − v; 0, σ²I),   (5.84)

where N(x; µ, Σ) is the standard normal density. This kernel is also known as the radial basis function (RBF) kernel, because its value decreases along lines in v space radiating outward from u. The Gaussian kernel corresponds to a dot product in an infinite-dimensional space, but the derivation of this space is less straightforward than in our example of the min kernel over the integers.

We can think of the Gaussian kernel as performing a kind of template matching. A training example x associated with training label y becomes a template for class y. When a test point x′ is near x according to Euclidean distance, the Gaussian kernel has a large response, indicating that x′ is very similar to the x template. The model then puts a large weight on the associated training label y. Overall, the prediction will combine many such training labels weighted by the similarity of the corresponding training examples.
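As a concrete illustration, the Gaussian kernel of equation 5.84 can be sketched directly from the normal-density definition. This is a minimal sketch; the `gaussian_kernel` function name and the choice of σ are our own, not from the text.

```python
import numpy as np

def gaussian_kernel(u, v, sigma=1.0):
    """Gaussian (RBF) kernel k(u, v) = N(u - v; 0, sigma^2 I).

    Evaluates the isotropic multivariate normal density at u - v,
    so the response is largest when u and v coincide and decays
    as v moves away from u in Euclidean distance.
    """
    u, v = np.asarray(u, float), np.asarray(v, float)
    d = u.size
    sq_dist = np.sum((u - v) ** 2)
    norm_const = (2.0 * np.pi * sigma**2) ** (-d / 2.0)
    return norm_const * np.exp(-sq_dist / (2.0 * sigma**2))

u = np.array([0.0, 0.0])
print(gaussian_kernel(u, np.array([0.0, 0.0])))  # maximal response: u equals v
print(gaussian_kernel(u, np.array([3.0, 0.0])))  # smaller: farther away in v space
```

The decreasing response with distance is what makes the template-matching interpretation work: nearby training examples get large weight, distant ones almost none.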
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf
157
Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org)
0
Support vector machines are not the only algorithm that can be enhanced using the kernel trick. Many other linear models can be enhanced in this way. The category of algorithms that employ the kernel trick is known as kernel machines or kernel methods (Williams and Rasmussen, 1996; Schölkopf et al., 1999).

A major drawback to kernel machines is that the cost of evaluating the decision function is linear in the number of training examples, because the i-th example contributes a term α_i k(x, x^(i)) to the decision function. Support vector machines are able to mitigate this by learning an α vector that contains mostly zeros. Classifying a new example then requires evaluating the kernel function only for the training examples that have non-zero α_i. These training examples are known as support vectors.
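To make the cost argument concrete, here is a hedged sketch of a kernel-machine decision function f(x) = b + Σ_i α_i k(x, x^(i)). The function names and toy data are ours; the point is only that examples with α_i = 0 contribute nothing and can be skipped entirely.

```python
import numpy as np

def rbf(u, v, sigma=1.0):
    # Unnormalized Gaussian kernel; sufficient for illustration.
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma**2))

def decision_function(x, X_train, alpha, b=0.0):
    """f(x) = b + sum_i alpha_i k(x, x^(i)).

    Only training examples with non-zero alpha_i (the support
    vectors) contribute, so the other kernel evaluations are
    skipped entirely.
    """
    f = b
    for alpha_i, x_i in zip(alpha, X_train):
        if alpha_i != 0.0:   # non-support vectors cost nothing at test time
            f += alpha_i * rbf(x, x_i)
    return f

X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
alpha = np.array([1.0, 0.0, 0.0, -1.0])  # mostly zeros, as an SVM would learn
print(decision_function(np.array([0.1]), X_train, alpha))  # positive near the +1 template
```

With a dense α the loop costs one kernel evaluation per training example; with a mostly-zero α it costs one per support vector, which is the mitigation described above.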
Kernel machines also suffer from a high computational cost of training when the dataset is large. We will revisit this idea in section 5.9. Kernel machines with generic kernels struggle to generalize well. We will explain why in section 5.11. The modern incarnation of deep learning was designed to overcome these limitations of kernel machines. The current deep learning renaissance began when Hinton et al. (2006) demonstrated that a neural network could outperform the RBF kernel SVM on the MNIST benchmark.

5.7.3 Other Simple Supervised Learning Algorithms

We have already briefly encountered another non-probabilistic supervised learning algorithm, nearest neighbor regression. More generally, k-nearest neighbors is a family of techniques that can be used for classification or regression. As a non-parametric learning algorithm, k-nearest neighbors is not restricted to a fixed number of parameters. We usually think of the k-nearest neighbors algorithm as not having any parameters, but rather implementing a simple function of the training data. In fact, there is not even really a training stage or learning process. Instead, at test time, when we want to produce an output y for a new test input x, we find the k nearest neighbors to x in the training data X.
We then return the average of the corresponding y values in the training set. This works for essentially any kind of supervised learning where we can define an average over y values. In the case of classification, we can average over one-hot code vectors c with c_y = 1 and c_i = 0 for all other values of i. We can then interpret the average over these one-hot codes as giving a probability distribution over classes. As a non-parametric learning algorithm, k-nearest neighbors can achieve very high capacity. For example, suppose we have a multiclass classification task and measure performance with 0-1 loss. In this setting, 1-nearest neighbor converges to double the Bayes error as the number of training examples approaches infinity. The error in excess of the Bayes error results from choosing a single neighbor by breaking ties between equally distant neighbors randomly. When there is infinite training data, all test points x will have infinitely many training set neighbors at distance zero.
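The procedure just described, finding the k closest training points and averaging their one-hot labels to obtain class probabilities, can be sketched in a few lines. The function name and toy data are our own.

```python
import numpy as np

def knn_predict_proba(x, X_train, y_train, k, n_classes):
    """Average the one-hot codes of the k nearest training examples.

    Returns a vector that can be read as a probability distribution
    over classes, as described in the text.
    """
    dists = np.sum((X_train - x) ** 2, axis=1)    # squared Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    one_hot = np.zeros((k, n_classes))
    one_hot[np.arange(k), y_train[nearest]] = 1.0  # c_y = 1, all other c_i = 0
    return one_hot.mean(axis=0)                    # average of the one-hot codes

X_train = np.array([[0.0], [0.1], [0.2], [5.0], [5.1]])
y_train = np.array([0, 0, 0, 1, 1])
print(knn_predict_proba(np.array([0.05]), X_train, y_train, k=3, n_classes=2))
```

Note there is no training step at all: the "model" is just the stored training data, exactly as the text says.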
If we allow the algorithm to use all of these neighbors to vote, rather than randomly choosing one of them, the procedure converges to the Bayes error rate. The high capacity of k-nearest neighbors allows it to obtain high accuracy given a large training set. However, it does so at high computational cost, and it may generalize very badly given a small, finite training set. One weakness of k-nearest neighbors is that it cannot learn that one feature is more discriminative than another. For example, imagine we have a regression task with x ∈ R^100 drawn from an isotropic Gaussian distribution, but only a single variable x_1 is relevant to the output.
Suppose further that this feature simply encodes the output directly, i.e. that y = x_1 in all cases. Nearest neighbor regression will not be able to detect this simple pattern. The nearest neighbor of most points x will be determined by the large number of features x_2 through x_100, not by the lone feature x_1. Thus the output on small training sets will essentially be random.
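This failure mode can be demonstrated numerically. Under our own choices of seed and sample size, with x drawn from a 100-dimensional isotropic Gaussian and y = x_1, the squared distance that selects the nearest neighbor is dominated by the 99 irrelevant coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 100
X = rng.standard_normal((m, n))   # isotropic Gaussian inputs
y = X[:, 0]                       # only x_1 carries the output

x_test = rng.standard_normal(n)
sq_diffs = (X - x_test) ** 2
nearest = np.argmin(sq_diffs.sum(axis=1))   # 1-NN in the full 100-D space

# Fraction of the nearest neighbor's squared distance due to the single
# relevant feature x_1. It is tiny, so x_1 barely influences which
# neighbor is chosen, and the prediction y_hat = y[nearest] is
# essentially unrelated to the true y = x_test[0].
frac_relevant = sq_diffs[nearest, 0] / sq_diffs[nearest].sum()
print(frac_relevant)
```

On average each of the 100 coordinates contributes about 1% of the squared distance, which is why the lone informative feature cannot steer the neighbor choice.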
Figure 5.7: Diagrams describing how a decision tree works. (Top) Each node of the tree chooses to send the input example to the child node on the left (0) or the child node on the right (1). Internal nodes are drawn as circles and leaf nodes as squares. Each node is displayed with a binary string identifier corresponding to its position in the tree, obtained by appending a bit to its parent identifier (0 = choose left or top, 1 = choose right or bottom). (Bottom) The tree divides space into regions. The 2-D plane shows how a decision tree might divide R². The nodes of the tree are plotted in this plane, with each internal node drawn along the dividing line it uses to categorize examples, and leaf nodes drawn in the center of the region of examples they receive. The result is a piecewise-constant function, with one piece per leaf.
Each leaf requires at least one training example to define, so it is not possible for the decision tree to learn a function that has more local maxima than the number of training examples.
Another type of learning algorithm that also breaks the input space into regions and has separate parameters for each region is the decision tree (Breiman et al., 1984) and its many variants. As shown in figure 5.7, each node of the decision tree is associated with a region in the input space, and internal nodes break that region into one sub-region for each child of the node (typically using an axis-aligned cut). Space is thus sub-divided into non-overlapping regions, with a one-to-one correspondence between leaf nodes and input regions. Each leaf node usually maps every point in its input region to the same output. Decision trees are usually trained with specialized algorithms that are beyond the scope of this book. The learning algorithm can be considered non-parametric if it is allowed to learn a tree of arbitrary size, though decision trees are usually regularized with size constraints that turn them into parametric models in practice. Decision trees as they are typically used, with axis-aligned splits and constant outputs within each node, struggle to solve some problems that are easy even for logistic regression. For example, if we have a two-class problem and the positive class occurs wherever x_2 > x_1, the decision boundary is not axis-aligned.
The decision tree will thus need to approximate the decision boundary with many nodes, implementing a step function that constantly walks back and forth across the true decision function with axis-aligned steps.

As we have seen, nearest neighbor predictors and decision trees have many limitations. Nonetheless, they are useful learning algorithms when computational resources are constrained. We can also build intuition for more sophisticated learning algorithms by thinking about the similarities and differences between sophisticated algorithms and k-NN or decision tree baselines. See Murphy (2012), Bishop (2006), Hastie et al. (2001), or other machine learning textbooks for more material on traditional supervised learning algorithms.

5.8 Unsupervised Learning Algorithms

Recall from section 5.1.3 that unsupervised algorithms are those that experience only "features" but not a supervision signal. The distinction between supervised and unsupervised algorithms is not formally and rigidly defined because there is no objective test for distinguishing whether a value is a feature or a target provided by a supervisor.
Informally, unsupervised learning refers to most attempts to extract information from a distribution that do not require human labor to annotate examples. The term is usually associated with density estimation, learning to draw samples from a distribution, learning to denoise data from some distribution, finding a manifold that the data lies near, or clustering the data into groups of related examples.
A classic unsupervised learning task is to find the "best" representation of the data. By "best" we can mean different things, but generally speaking we are looking for a representation that preserves as much information about x as possible while obeying some penalty or constraint aimed at keeping the representation simpler or more accessible than x itself.

There are multiple ways of defining a simpler representation. Three of the most common include lower-dimensional representations, sparse representations, and independent representations. Low-dimensional representations attempt to compress as much information about x as possible in a smaller representation. Sparse representations (Barlow, 1989; Olshausen and Field, 1996; Hinton and Ghahramani, 1997) embed the dataset into a representation whose entries are mostly zeroes for most inputs. The use of sparse representations typically requires increasing the dimensionality of the representation, so that the representation becoming mostly zeroes does not discard too much information. This results in an overall structure of the representation that tends to distribute data along the axes of the representation space. Independent representations attempt to disentangle the sources of variation underlying the data distribution such that the dimensions of the representation are statistically independent. Of course these three criteria are certainly not mutually exclusive.
Low-dimensional representations often yield elements that have fewer or weaker dependencies than the original high-dimensional data. This is because one way to reduce the size of a representation is to find and remove redundancies. Identifying and removing more redundancy allows the dimensionality reduction algorithm to achieve more compression while discarding less information.

The notion of representation is one of the central themes of deep learning and therefore one of the central themes in this book. In this section, we develop some simple examples of representation learning algorithms. Together, these example algorithms show how to operationalize all three of the criteria above. Most of the remaining chapters introduce additional representation learning algorithms that develop these criteria in different ways or introduce other criteria.

5.8.1 Principal Components Analysis

In section 2.12, we saw that the principal components analysis algorithm provides a means of compressing data. We can also view PCA as an unsupervised learning algorithm that learns a representation of data.
This representation is based on two of the criteria for a simple representation described above. PCA learns a representation that has lower dimensionality than the original input.
Figure 5.8: PCA learns a linear projection that aligns the direction of greatest variance with the axes of the new space. (Left) The original data consists of samples of x. In this space, the variance might occur along directions that are not axis-aligned. (Right) The transformed data z = x^T W now varies most along the axis z_1. The direction of second-most variance is now along z_2.

It also learns a representation whose elements have no linear correlation with each other. This is a first step toward the criterion of learning representations whose elements are statistically independent. To achieve full independence, a representation learning algorithm must also remove the nonlinear relationships between variables.

PCA learns an orthogonal, linear transformation of the data that projects an input x to a representation z, as shown in figure 5.8. In section 2.12, we saw that we could learn a one-dimensional representation that best reconstructs the original data (in the sense of mean squared error) and that this representation actually corresponds to the first principal component of the data.
Thus we can use PCA as a simple and effective dimensionality reduction method that preserves as much of the information in the data as possible (again, as measured by least-squares reconstruction error). In the following, we will study how the PCA representation decorrelates the original data representation x.

Let us consider the m × n design matrix X. We will assume that the data has a mean of zero, E[x] = 0. If this is not the case, the data can easily be centered by subtracting the mean from all examples in a preprocessing step. The unbiased sample covariance matrix associated with X is given by:

Var[x] = (1 / (m − 1)) X^T X.   (5.85)
PCA finds a representation (through linear transformation) z = x^T W where Var[z] is diagonal.

In section 2.12, we saw that the principal components of a design matrix X are given by the eigenvectors of X^T X. From this view,

X^T X W = W Λ.   (5.86)

In this section, we exploit an alternative derivation of the principal components. The principal components may also be obtained via the singular value decomposition. Specifically, they are the right singular vectors of X. To see this, let W be the right singular vectors in the decomposition X = UΣW^T. We then recover the original eigenvector equation with W as the eigenvector basis:

X^T X = (UΣW^T)^T UΣW^T = WΣ²W^T.   (5.87)

The SVD is helpful to show that PCA results in a diagonal Var[z]. Using the SVD of X, we can express the variance of x as:

Var[x] = (1 / (m − 1)) X^T X   (5.88)
       = (1 / (m − 1)) (UΣW^T)^T UΣW^T   (5.89)
       = (1 / (m − 1)) WΣ^T U^T UΣW^T   (5.90)
       = (1 / (m − 1)) WΣ²W^T,   (5.91)

where we use the fact that U^T U = I because the U matrix of the singular value decomposition is defined to be orthogonal. This shows that if we take z = x^T W, we can ensure that the covariance of z is diagonal as required:

Var[z] = (1 / (m − 1)) Z^T Z   (5.92)
       = (1 / (m − 1)) W^T X^T X W   (5.93)
       = (1 / (m − 1)) W^T WΣ²W^T W   (5.94)
       = (1 / (m − 1)) Σ²,   (5.95)

where this time we use the fact that W^T W = I, again from the definition of the SVD.
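The derivation in equations 5.85 through 5.95 can be checked numerically: take the right singular vectors W of a centered design matrix X and verify that Var[z] is diagonal. The data, seed, and tolerances below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 500, 3

# Correlated data: mix independent sources with a fixed matrix.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.2, 0.0, 1.0]])
X = rng.standard_normal((m, n)) @ A.T
X = X - X.mean(axis=0)          # center so E[x] = 0, as eq. 5.85 assumes

U, s, Wt = np.linalg.svd(X, full_matrices=False)
W = Wt.T                        # right singular vectors = principal components

Z = X @ W                       # z = x^T W for each row of X
var_z = Z.T @ Z / (m - 1)       # eq. 5.92

# Off-diagonal entries vanish up to floating point, matching eq. 5.95:
# var_z should equal diag(s**2) / (m - 1).
off_diag = var_z - np.diag(np.diag(var_z))
print(np.max(np.abs(off_diag)))
```

The diagonal entries of `var_z` are exactly the squared singular values divided by m − 1, which is the Σ²/(m − 1) of equation 5.95.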
The above analysis shows that when we project the data x to z, via the linear transformation W, the resulting representation has a diagonal covariance matrix (as given by Σ²), which immediately implies that the individual elements of z are mutually uncorrelated.

This ability of PCA to transform data into a representation where the elements are mutually uncorrelated is a very important property of PCA. It is a simple example of a representation that attempts to disentangle the unknown factors of variation underlying the data. In the case of PCA, this disentangling takes the form of finding a rotation of the input space (described by W) that aligns the principal axes of variance with the basis of the new representation space associated with z.

While correlation is an important category of dependency between elements of the data, we are also interested in learning representations that disentangle more complicated forms of feature dependencies. For this, we will need more than what can be done with a simple linear transformation.

5.8.2 k-means Clustering

Another example of a simple representation learning algorithm is k-means clustering. The k-means clustering algorithm divides the training set into k different clusters of examples that are near each other.
We can thus think of the algorithm as providing a k-dimensional one-hot code vector h representing an input x. If x belongs to cluster i, then h_i = 1 and all other entries of the representation h are zero.

The one-hot code provided by k-means clustering is an example of a sparse representation, because the majority of its entries are zero for every input. Later, we will develop other algorithms that learn more flexible sparse representations, where more than one entry can be non-zero for each input x. One-hot codes are an extreme example of sparse representations that lose many of the benefits of a distributed representation. The one-hot code still confers some statistical advantages (it naturally conveys the idea that all examples in the same cluster are similar to each other) and it confers the computational advantage that the entire representation may be captured by a single integer.

The k-means algorithm works by initializing k different centroids {µ^(1), ..., µ^(k)} to different values, then alternating between two different steps until convergence.
In one step, each training example is assigned to cluster i, where i is the index of the nearest centroid µ^(i). In the other step, each centroid µ^(i) is updated to the mean of all training examples x^(j) assigned to cluster i.
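The two alternating steps just described (assign each example to its nearest centroid, then move each centroid to the mean of its assigned examples) can be sketched as follows. The initialization scheme and stopping rule are our own simplifications.

```python
import numpy as np

def k_means(X, k, n_iters=100, seed=0):
    """Plain k-means: alternate the assignment and mean-update steps."""
    rng = np.random.default_rng(seed)
    # Initialize centroids to k distinct training examples.
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iters):
        # Step 1: assign each example to the index of the nearest centroid.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        assign = dists.argmin(axis=1)
        # Step 2: update each centroid to the mean of its assigned examples.
        new_centroids = np.array([
            X[assign == i].mean(axis=0) if np.any(assign == i) else centroids[i]
            for i in range(k)
        ])
        if np.allclose(new_centroids, centroids):   # converged
            break
        centroids = new_centroids
    return centroids, assign

# Two well-separated blobs: k-means recovers them.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
centroids, assign = k_means(X, k=2)
print(centroids)
```

The returned `assign` vector is exactly the cluster index i of the one-hot code described above: h_i = 1 for that index and zero elsewhere.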
One difficulty pertaining to clustering is that the clustering problem is inherently ill-posed, in the sense that there is no single criterion that measures how well a clustering of the data corresponds to the real world. We can measure properties of the clustering, such as the average Euclidean distance from a cluster centroid to the members of the cluster. This allows us to tell how well we are able to reconstruct the training data from the cluster assignments. We do not know how well the cluster assignments correspond to properties of the real world. Moreover, there may be many different clusterings that all correspond well to some property of the real world. We may hope to find a clustering that relates to one feature but obtain a different, equally valid clustering that is not relevant to our task. For example, suppose that we run two clustering algorithms on a dataset consisting of images of red trucks, images of red cars, images of gray trucks, and images of gray cars. If we ask each clustering algorithm to find two clusters, one algorithm may find a cluster of cars and a cluster of trucks, while another may find a cluster of red vehicles and a cluster of gray vehicles. Suppose we also run a third clustering algorithm, which is allowed to determine the number of clusters.
This may assign the examples to four clusters: red cars, red trucks, gray cars, and gray trucks. This new clustering now at least captures information about both attributes, but it has lost information about similarity. Red cars are in a different cluster from gray cars, just as they are in a different cluster from gray trucks. The output of the clustering algorithm does not tell us that red cars are more similar to gray cars than they are to gray trucks. They are different from both things, and that is all we know.

These issues illustrate some of the reasons that we may prefer a distributed representation to a one-hot representation. A distributed representation could have two attributes for each vehicle: one representing its color and one representing whether it is a car or a truck.
It is still not entirely clear what the optimal distributed representation is (how can the learning algorithm know whether the two attributes we are interested in are color and car-versus-truck rather than manufacturer and age?), but having many attributes reduces the burden on the algorithm to guess which single attribute we care about, and allows us to measure similarity between objects in a fine-grained way by comparing many attributes instead of just testing whether one attribute matches.

5.9 Stochastic Gradient Descent

Nearly all of deep learning is powered by one very important algorithm: stochastic gradient descent, or SGD. Stochastic gradient descent is an extension of the gradient descent algorithm introduced in section 4.3.
A recurring problem in machine learning is that large training sets are necessary for good generalization, but large training sets are also more computationally expensive. The cost function used by a machine learning algorithm often decomposes as a sum over training examples of some per-example loss function. For example, the negative conditional log-likelihood of the training data can be written as

J(θ) = E_{x,y~p̂_data} L(x, y, θ) = (1/m) Σ_{i=1}^{m} L(x^(i), y^(i), θ),   (5.96)

where L is the per-example loss L(x, y, θ) = −log p(y | x; θ).

For these additive cost functions, gradient descent requires computing

∇_θ J(θ) = (1/m) Σ_{i=1}^{m} ∇_θ L(x^(i), y^(i), θ).   (5.97)

The computational cost of this operation is O(m). As the training set size grows to billions of examples, the time to take a single gradient step becomes prohibitively long.

The insight of stochastic gradient descent is that the gradient is an expectation.
The expectation may be approximately estimated using a small set of samples. Specifically, on each step of the algorithm, we can sample a minibatch of examples B = {x^(1), ..., x^(m′)} drawn uniformly from the training set. The minibatch size m′ is typically chosen to be a relatively small number of examples, ranging from 1 to a few hundred. Crucially, m′ is usually held fixed as the training set size m grows. We may fit a training set with billions of examples using updates computed on only a hundred examples.

The estimate of the gradient is formed as

g = (1/m′) ∇_θ Σ_{i=1}^{m′} L(x^(i), y^(i), θ)   (5.98)

using examples from the minibatch B. The stochastic gradient descent algorithm then follows the estimated gradient downhill:

θ ← θ − εg,   (5.99)

where ε is the learning rate.
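Equations 5.96 through 5.99 translate directly into code. Below is a hedged sketch of minibatch SGD on a least-squares linear model (squared error stands in for the negative log-likelihood; the minibatch size, learning rate, and data are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 1000, 5
true_theta = rng.standard_normal(n)
X = rng.standard_normal((m, n))
y = X @ true_theta + 0.01 * rng.standard_normal(m)

def loss(theta, Xb, yb):
    # Mean of the per-example squared-error losses, as in eq. 5.96.
    return np.mean((Xb @ theta - yb) ** 2)

theta = np.zeros(n)
eps = 0.05       # learning rate (epsilon in eq. 5.99)
m_prime = 32     # minibatch size m', held fixed even as m grows

for step in range(500):
    idx = rng.choice(m, size=m_prime, replace=False)   # sample minibatch B
    Xb, yb = X[idx], y[idx]
    # g = (1/m') * gradient of the summed minibatch loss (eq. 5.98)
    g = 2.0 / m_prime * Xb.T @ (Xb @ theta - yb)
    theta = theta - eps * g                            # theta <- theta - eps*g (eq. 5.99)

print(loss(theta, X, y))   # near the noise floor
```

Each step touches only m′ = 32 examples, so the per-update cost is independent of the full training set size m, which is the scaling argument made in the text.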
Gradient descent in general has often been regarded as slow or unreliable. In the past, the application of gradient descent to non-convex optimization problems was regarded as foolhardy or unprincipled. Today, we know that the machine learning models described in part II work very well when trained with gradient descent. The optimization algorithm may not be guaranteed to arrive at even a local minimum in a reasonable amount of time, but it often finds a very low value of the cost function quickly enough to be useful.

Stochastic gradient descent has many important uses outside the context of deep learning. It is the main way to train large linear models on very large datasets. For a fixed model size, the cost per SGD update does not depend on the training set size m. In practice, we often use a larger model as the training set size increases, but we are not forced to do so. The number of updates required to reach convergence usually increases with training set size. However, as m approaches infinity, the model will eventually converge to its best possible test error before SGD has sampled every example in the training set. Increasing m further will not extend the amount of training time needed to reach the model's best possible test error.
From this point of view, one can argue that the asymptotic cost of training a model with SGD is O(1) as a function of m.

Prior to the advent of deep learning, the main way to learn nonlinear models was to use the kernel trick in combination with a linear model. Many kernel learning algorithms require constructing an m × m matrix G_{i,j} = k(x^(i), x^(j)). Constructing this matrix has computational cost O(m^2), which is clearly undesirable for datasets with billions of examples. In academia, starting in 2006, deep learning was initially interesting because it was able to generalize to new examples better than competing algorithms when trained on medium-sized datasets with tens of thousands of examples. Soon after, deep learning garnered additional interest in industry, because it provided a scalable way of training nonlinear models on large datasets.
Stochastic gradient descent and many enhancements to it are described further in chapter 8.

5.10 Building a Machine Learning Algorithm

Nearly all deep learning algorithms can be described as particular instances of a fairly simple recipe: combine a specification of a dataset, a cost function, an optimization procedure and a model.

For example, the linear regression algorithm combines a dataset consisting of
X and y, the cost function

J(w, b) = −E_{x,y∼p̂_data} log p_model(y | x), (5.100)

the model specification p_model(y | x) = N(y; x^T w + b, 1), and, in most cases, the optimization algorithm defined by solving for where the gradient of the cost is zero using the normal equations.

By realizing that we can replace any of these components mostly independently from the others, we can obtain a very wide variety of algorithms.

The cost function typically includes at least one term that causes the learning process to perform statistical estimation. The most common cost function is the negative log-likelihood, so that minimizing the cost function causes maximum likelihood estimation.

The cost function may also include additional terms, such as regularization terms. For example, we can add weight decay to the linear regression cost function to obtain

J(w, b) = λ ||w||_2^2 − E_{x,y∼p̂_data} log p_model(y | x). (5.101)

This still allows closed-form optimization. If we change the model to be nonlinear, then most cost functions can no longer be optimized in closed form.
This requires us to choose an iterative numerical optimization procedure, such as gradient descent.

The recipe for constructing a learning algorithm by combining models, costs, and optimization algorithms supports both supervised and unsupervised learning. The linear regression example shows how to support supervised learning. Unsupervised learning can be supported by defining a dataset that contains only X and providing an appropriate unsupervised cost and model. For example, we can obtain the first PCA vector by specifying that our loss function is

J(w) = E_{x∼p̂_data} ||x − r(x; w)||_2^2 (5.102)

while our model is defined to have w with norm one and reconstruction function r(x) = w^T x w.

In some cases, the cost function may be a function that we cannot actually evaluate, for computational reasons. In these cases, we can still approximately minimize it using iterative numerical optimization so long as we have some way of approximating its gradients.
Most machine learning algorithms make use of this recipe, though it may not immediately be obvious. If a machine learning algorithm seems especially unique or
hand-designed, it can usually be understood as using a special-case optimizer. Some models, such as decision trees or k-means, require special-case optimizers because their cost functions have flat regions that make them inappropriate for minimization by gradient-based optimizers. Recognizing that most machine learning algorithms can be described using this recipe helps to see the different algorithms as part of a taxonomy of methods for doing related tasks that work for similar reasons, rather than as a long list of algorithms that each have separate justifications.

5.11 Challenges Motivating Deep Learning

The simple machine learning algorithms described in this chapter work very well on a wide variety of important problems. However, they have not succeeded in solving the central problems in AI, such as recognizing speech or recognizing objects.

The development of deep learning was motivated in part by the failure of traditional algorithms to generalize well on such AI tasks.

This section is about how the challenge of generalizing to new examples becomes exponentially more difficult when working with high-dimensional data, and how the mechanisms used to achieve generalization in traditional machine learning are insufficient to learn complicated functions in high-dimensional spaces. Such spaces also often impose high computational costs. Deep learning was designed to overcome these and other obstacles.
5.11.1 The Curse of Dimensionality

Many machine learning problems become exceedingly difficult when the number of dimensions in the data is high. This phenomenon is known as the curse of dimensionality. Of particular concern is that the number of possible distinct configurations of a set of variables increases exponentially as the number of variables increases.
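The exponential growth is easy to make concrete. This short sketch just evaluates the v^d counting argument used in figure 5.9; the choice of v = 10 values per axis is illustrative.

```python
# O(v**d) growth of grid cells: v distinguishable values per axis,
# d input dimensions, so v**d cells to cover with examples.
v = 10
cells = {d: v ** d for d in (1, 2, 3, 10)}
for d, n in cells.items():
    print(f"d={d}: {n} cells, so at least {n} examples to cover them")
```

Already at d = 10, covering every cell would require ten billion examples.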
Figure 5.9: As the number of relevant dimensions of the data increases (from left to right), the number of configurations of interest may grow exponentially. (Left) In this one-dimensional example, we have one variable for which we only care to distinguish 10 regions of interest. With enough examples falling within each of these regions (each region corresponds to a cell in the illustration), learning algorithms can easily generalize correctly. A straightforward way to generalize is to estimate the value of the target function within each region (and possibly interpolate between neighboring regions). (Center) With 2 dimensions, it is more difficult to distinguish 10 different values of each variable. We need to keep track of up to 10 × 10 = 100 regions, and we need at least that many examples to cover all those regions. (Right) With 3 dimensions, this grows to 10^3 = 1,000 regions and at least that many examples. For d dimensions and v values to be distinguished along each axis, we seem to need O(v^d) regions and examples. This is an instance of the curse of dimensionality. Figure graciously provided by Nicolas Chapados.

The curse of dimensionality arises in many places in computer science, and especially so in machine learning.
One challenge posed by the curse of dimensionality is a statistical challenge. As illustrated in figure 5.9, a statistical challenge arises because the number of possible configurations of x is much larger than the number of training examples. To understand the issue, let us consider that the input space is organized into a grid, as in the figure. We can describe low-dimensional space with a low number of grid cells that are mostly occupied by the data. When generalizing to a new data point, we can usually tell what to do simply by inspecting the training examples that lie in the same cell as the new input. For example, if estimating the probability density at some point x, we can just return the number of training examples in the same unit volume cell as x, divided by the total number of training examples. If we wish to classify an example, we can return the most common class of training examples in the same cell. If we are doing regression, we can average the target values observed over the examples in that cell.
But what about the cells for which we have seen no example? Because in high-dimensional spaces the number of configurations is huge, much larger than our number of examples, a typical grid cell has no training example associated with it. How could we possibly say something
meaningful about these new configurations? Many traditional machine learning algorithms simply assume that the output at a new point should be approximately the same as the output at the nearest training point.

5.11.2 Local Constancy and Smoothness Regularization

In order to generalize well, machine learning algorithms need to be guided by prior beliefs about what kind of function they should learn. Previously, we have seen these priors incorporated as explicit beliefs in the form of probability distributions over parameters of the model. More informally, we may also discuss prior beliefs as directly influencing the function itself, acting on the parameters only indirectly, via their effect on the function. Additionally, we informally discuss prior beliefs as being expressed implicitly, by choosing algorithms that are biased toward choosing some class of functions over another, even though these biases may not be expressed (or even possible to express) in terms of a probability distribution representing our degree of belief in various functions.

Among the most widely used of these implicit "priors" is the smoothness prior, or local constancy prior. This prior states that the function we learn should not change very much within a small region. Many simpler algorithms rely exclusively on this prior to generalize well, and as a result they fail to scale to the statistical challenges involved in solving AI-level tasks.
Throughout this book, we will describe how deep learning introduces additional (explicit and implicit) priors in order to reduce the generalization error on sophisticated tasks. Here, we explain why the smoothness prior alone is insufficient for these tasks.

There are many different ways to implicitly or explicitly express a prior belief that the learned function should be smooth or locally constant. All of these different methods are designed to encourage the learning process to learn a function f* that satisfies the condition

f*(x) ≈ f*(x + ε) (5.103)

for most configurations x and small change ε. In other words, if we know a good answer for an input x (for example, if x is a labeled training example), then that answer is probably good in the neighborhood of x. If we have several good answers in some neighborhood, we would combine them (by some form of averaging or interpolation) to produce an answer that agrees with as many of them as much as possible.
An extreme example of the local constancy approach is the k-nearest neighbors family of learning algorithms. These predictors are literally constant over each
region containing all the points x that have the same set of k nearest neighbors in the training set. For k = 1, the number of distinguishable regions cannot be more than the number of training examples.

While the k-nearest neighbors algorithm copies the output from nearby training examples, most kernel machines interpolate between training set outputs associated with nearby training examples. An important class of kernels is the family of local kernels, where k(u, v) is large when u = v and decreases as u and v grow farther apart from each other. A local kernel can be thought of as a similarity function that performs template matching, by measuring how closely a test example x resembles each training example x^(i). Much of the modern motivation for deep learning is derived from studying the limitations of local template matching and how deep models are able to succeed in cases where local template matching fails (Bengio et al., 2006b).

Decision trees also suffer from the limitations of exclusively smoothness-based learning, because they break the input space into as many regions as there are leaves and use a separate parameter (or sometimes many parameters for extensions of decision trees) in each region. If the target function requires a tree with at least n leaves to be represented accurately, then at least n training examples are required to fit the tree.
A multiple of n is needed to achieve some level of statistical confidence in the predicted output.

In general, to distinguish O(k) regions in input space, all of these methods require O(k) examples. Typically there are O(k) parameters, with O(1) parameters associated with each of the O(k) regions. The case of a nearest neighbor scenario, where each training example can be used to define at most one region, is illustrated in figure 5.10.

Is there a way to represent a complex function that has many more regions to be distinguished than the number of training examples? Clearly, assuming only smoothness of the underlying function will not allow a learner to do that. For example, imagine that the target function is a kind of checkerboard. A checkerboard contains many variations, but there is a simple structure to them. Imagine what happens when the number of training examples is substantially smaller than the number of black and white squares on the checkerboard.
Based on only local generalization and the smoothness or local constancy prior, we would be guaranteed to correctly guess the color of a new point if it lies within the same checkerboard square as a training example. There is no guarantee that the learner could correctly extend the checkerboard pattern to points lying in squares that do not contain training examples. With this prior alone, the only information that an example tells us is the color of its square, and the only way to get the colors of the
Figure 5.10: Illustration of how the nearest neighbor algorithm breaks up the input space into regions. An example (represented here by a circle) within each region defines the region boundary (represented here by the lines). The y value associated with each example defines what the output should be for all points within the corresponding region. The regions defined by nearest neighbor matching form a geometric pattern called a Voronoi diagram. The number of these contiguous regions cannot grow faster than the number of training examples. While this figure illustrates the behavior of the nearest neighbor algorithm specifically, other machine learning algorithms that rely exclusively on the local smoothness prior for generalization exhibit similar behaviors: each training example only informs the learner about how to generalize in some neighborhood immediately surrounding that example.
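The piecewise-constant behavior described in the caption above can be sketched directly. This is a toy 1-nearest-neighbor rule; the three training points and their labels are illustrative values.

```python
import numpy as np

# Sketch of the nearest-neighbor rule from figure 5.10: the prediction is
# literally constant inside each Voronoi cell of the training points.
X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y_train = np.array([0, 1, 2])

def predict_1nn(x):
    # Copy the output of the single closest training example (Euclidean).
    d = np.linalg.norm(X_train - x, axis=1)
    return int(y_train[np.argmin(d)])

# Two test points in the same Voronoi cell receive the same label.
print(predict_1nn(np.array([0.1, 0.1])), predict_1nn(np.array([0.2, 0.05])))
```

With only three training points, the predictor can never distinguish more than three regions, no matter how complicated the target function is.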
entire checkerboard right is to cover each of its cells with at least one example.

The smoothness assumption and the associated non-parametric learning algorithms work extremely well so long as there are enough examples for the learning algorithm to observe high points on most peaks and low points on most valleys of the true underlying function to be learned. This is generally true when the function to be learned is smooth enough and varies in few enough dimensions. In high dimensions, even a very smooth function can change smoothly but in a different way along each dimension. If the function additionally behaves differently in different regions, it can become extremely complicated to describe with a set of training examples. If the function is complicated (we want to distinguish a huge number of regions compared to the number of examples), is there any hope to generalize well?

The answer to both of these questions (whether it is possible to represent a complicated function efficiently, and whether it is possible for the estimated function to generalize well to new inputs) is yes. The key insight is that a very large number of regions, e.g., O(2^k), can be defined with O(k) examples, so long as we introduce some dependencies between the regions via additional assumptions about the underlying data-generating distribution.
In this way, we can actually generalize non-locally (Bengio and Monperrus, 2005; Bengio et al., 2006c). Many different deep learning algorithms provide implicit or explicit assumptions that are reasonable for a broad range of AI tasks in order to capture these advantages.

Other approaches to machine learning often make stronger, task-specific assumptions. For example, we could easily solve the checkerboard task by providing the assumption that the target function is periodic. Usually we do not include such strong, task-specific assumptions into neural networks, so that they can generalize to a much wider variety of structures. AI tasks have structure that is much too complex to be limited to simple, manually specified properties such as periodicity, so we want learning algorithms that embody more general-purpose assumptions. The core idea in deep learning is that we assume that the data was generated by the composition of factors, or features, potentially at multiple levels in a hierarchy.
Many other similarly generic assumptions can further improve deep learning algorithms. These apparently mild assumptions allow an exponential gain in the relationship between the number of examples and the number of regions that can be distinguished. These exponential gains are described more precisely in sections 6.4.1, 15.4, and 15.5. The exponential advantages conferred by the use of deep, distributed representations counter the exponential challenges posed by the curse of dimensionality.
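The O(2^k) counting argument can be illustrated numerically. The sketch below uses the signs of k random affine functions as a stand-in for k learned binary features; the dimensions, constants, and random features are all illustrative assumptions, not the book's construction.

```python
import numpy as np

# Sketch of a distributed representation: k binary features (here, signs
# of k random affine functions) jointly define up to 2**k regions of the
# input space, yet each individual feature is simple to learn.
rng = np.random.default_rng(0)
k, d = 8, 10
W = rng.normal(size=(k, d))          # k random hyperplane normals
b = rng.normal(size=k)               # k random offsets
x = rng.normal(size=(20_000, d))     # sample points in R^d

codes = (x @ W.T + b > 0)            # k-bit code for each point
n_regions = len({tuple(c) for c in codes})
print(f"{n_regions} occupied regions out of at most {2**k}")
```

The number of parameters grows only linearly in k, while the number of distinguishable regions can grow as 2^k, which is the exponential gain referred to above.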
5.11.3 Manifold Learning

An important concept underlying many ideas in machine learning is that of a manifold.

A manifold is a connected region. Mathematically, it is a set of points associated with a neighborhood around each point. From any given point, the manifold locally appears to be a Euclidean space. In everyday life, we experience the surface of the world as a 2-D plane, but it is in fact a spherical manifold in 3-D space.

The definition of a neighborhood surrounding each point implies the existence of transformations that can be applied to move on the manifold from one position to a neighboring one. In the example of the world's surface as a manifold, one can walk north, south, east, or west.

Although there is a formal mathematical meaning to the term "manifold," in machine learning it tends to be used more loosely to designate a connected set of points that can be approximated well by considering only a small number of degrees of freedom, or dimensions, embedded in a higher-dimensional space. Each dimension corresponds to a local direction of variation. See figure 5.11 for an example of training data lying near a one-dimensional manifold embedded in two-dimensional space.
In the context of machine learning, we allow the dimensionality of the manifold to vary from one point to another. This often happens when a manifold intersects itself. For example, a figure eight is a manifold that has a single dimension in most places but two dimensions at the intersection at the center.

Figure 5.11: Data sampled from a distribution in a two-dimensional space that is actually concentrated near a one-dimensional manifold, like a twisted string. The solid line indicates the underlying manifold that the learner should infer.
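Data of the kind shown in figure 5.11 is easy to synthesize. In this sketch, a single underlying coordinate t is pushed through a smooth embedding plus small off-manifold noise; the particular curve and noise level are illustrative choices, not the ones used to produce the figure.

```python
import numpy as np

# Sketch of data concentrated near a one-dimensional manifold in 2-D.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 4.0 * np.pi, size=500)     # manifold coordinate
x = np.stack([t, np.sin(t)], axis=1)            # smooth 1-D curve in R^2
x += 0.05 * rng.normal(size=x.shape)            # points lie *near* the curve

# Each point is a 2-D vector, but one degree of freedom (t) explains
# almost all of the variation.
print(x.shape)
```

A manifold learning algorithm given only x should recover something like the coordinate t, rather than treating the two ambient coordinates as independent.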
Many machine learning problems seem hopeless if we expect the machine learning algorithm to learn functions with interesting variations across all of R^n. Manifold learning algorithms surmount this obstacle by assuming that most of R^n consists of invalid inputs, and that interesting inputs occur only along a collection of manifolds containing a small subset of points, with interesting variations in the output of the learned function occurring only along directions that lie on the manifold, or with interesting variations happening only when we move from one manifold to another. Manifold learning was introduced in the case of continuous-valued data and the unsupervised learning setting, although this probability concentration idea can be generalized to both discrete data and the supervised learning setting: the key assumption remains that probability mass is highly concentrated.

The assumption that the data lies along a low-dimensional manifold may not always be correct or useful. We argue that in the context of AI tasks, such as those that involve processing images, sounds, or text, the manifold assumption is at least approximately correct. The evidence in favor of this assumption consists of two categories of observations.

The first observation in favor of the manifold hypothesis is that the probability distribution over images, text strings, and sounds that occur in real life is highly concentrated.
Uniform noise essentially never resembles structured inputs from these domains. Figure 5.12 shows how, instead, uniformly sampled points look like the patterns of static that appear on analog television sets when no signal is available. Similarly, if you generate a document by picking letters uniformly at random, what is the probability that you will get a meaningful English-language text? Almost zero, again, because most of the long sequences of letters do not correspond to a natural language sequence: the distribution of natural language sequences occupies a very small volume in the total space of sequences of letters.
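The "almost zero" claim can be checked empirically with a tiny simulation. The small word set below is an illustrative stand-in for a real English lexicon, not an actual dictionary.

```python
import random
import string

# Sketch: strings of uniformly random letters essentially never form
# natural language. With 26**5 possible 5-letter strings and only a
# handful of target words, hits should be vanishingly rare.
words = {"there", "about", "think", "which", "their", "would", "learn"}
random.seed(0)
hits = sum(
    "".join(random.choices(string.ascii_lowercase, k=5)) in words
    for _ in range(100_000)
)
print(hits, "of 100000 uniform 5-letter strings landed in the word set")
```

Even with a full English lexicon in place of this toy set, the fraction of random strings that are words remains tiny, and it shrinks exponentially with string length.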
Figure 5.12: Sampling images uniformly at random (by randomly picking each pixel according to a uniform distribution) gives rise to noisy images. Although there is a non-zero probability of generating an image of a face or any other object frequently encountered in AI applications, we never actually observe this happening in practice. This suggests that the images encountered in AI applications occupy a negligible proportion of the volume of image space.

Of course, concentrated probability distributions are not sufficient to show that the data lies on a reasonably small number of manifolds. We must also establish that the examples we encounter are connected to each other by other
examples, with each example surrounded by other highly similar examples that may be reached by applying transformations to traverse the manifold.

The second argument in favor of the manifold hypothesis is that we can also imagine such neighborhoods and transformations, at least informally. In the case of images, we can certainly think of many possible transformations that allow us to trace out a manifold in image space: we can gradually dim or brighten the lights, gradually move or rotate objects in the image, gradually alter the colors on the surfaces of objects, etc. It remains likely that there are multiple manifolds involved in most applications. For example, the manifold of images of human faces may not be connected to the manifold of images of cat faces.

These thought experiments supporting the manifold hypotheses convey some intuitive reasons supporting it. More rigorous experiments (Cayton, 2005; Narayanan and Mitter, 2010; Schölkopf et al., 1998; Roweis and Saul, 2000; Tenenbaum et al., 2000; Brand, 2003; Belkin and Niyogi, 2003; Donoho and Grimes, 2003; Weinberger and Saul, 2004) clearly support the hypothesis for a large class of datasets of interest in AI.
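One of the transformations mentioned above, gradually brightening an image, can be sketched as a path through pixel space. The toy random "image" and the brightness range are illustrative assumptions; a real photograph would behave the same way.

```python
import numpy as np

# Sketch of one manifold direction in image space: rescaling brightness
# traces a 1-D path through the high-dimensional space of pixel vectors,
# and neighboring points on the path stay highly similar.
rng = np.random.default_rng(0)
image = rng.uniform(0.2, 0.8, size=(32, 32))    # toy grayscale image

scales = np.linspace(0.5, 1.5, 11)              # dim ... original ... bright
path = [np.clip(image * s, 0.0, 1.0) for s in scales]

# Consecutive images along the path differ only slightly in pixel space.
steps = [np.linalg.norm(b - a) for a, b in zip(path, path[1:])]
print(f"max step {max(steps):.3f} vs endpoint distance "
      f"{np.linalg.norm(path[-1] - path[0]):.3f}")
```

Each small step stays in a neighborhood of the previous image, while the path as a whole covers a large distance in pixel space, which is exactly the connectivity the manifold hypothesis requires.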
When the data lies on a low-dimensional manifold, it can be most natural for machine learning algorithms to represent the data in terms of coordinates on the manifold, rather than in terms of coordinates in R^n. In everyday life, we can think of roads as 1-D manifolds embedded in 3-D space. We give directions to specific addresses in terms of address numbers along these 1-D roads, not in terms of coordinates in 3-D space. Extracting these manifold coordinates is challenging, but holds the promise to improve many machine learning algorithms. This general principle is applied in many contexts. Figure 5.13 shows the manifold structure of a dataset consisting of faces. By the end of this book, we will have developed the methods necessary to learn such a manifold structure. In figure 20.6, we will see how a machine learning algorithm can successfully accomplish this goal.

This concludes part I, which has provided the basic concepts in mathematics and machine learning which are employed throughout the remaining parts of
the book. You are now prepared to embark upon your study of deep learning.
Figure 5.13: Training examples from the QMUL Multiview Face Dataset (Gong et al., 2000), for which the subjects were asked to move in such a way as to cover the two-dimensional manifold corresponding to two angles of rotation. We would like learning algorithms to be able to discover and disentangle such manifold coordinates. Figure 20.6 illustrates such a feat.
Part II

Deep Networks: Modern Practices
This part of the book summarizes the state of modern deep learning as it is used to solve practical applications.

Deep learning has a long history and many aspirations. Several approaches have been proposed that have yet to entirely bear fruit. Several ambitious goals have yet to be realized. These less-developed branches of deep learning appear in the final part of the book.

This part focuses only on those approaches that are essentially working technologies that are already used heavily in industry.

Modern deep learning provides a very powerful framework for supervised learning. By adding more layers and more units within a layer, a deep network can represent functions of increasing complexity. Most tasks that consist of mapping an input vector to an output vector, and that are easy for a person to do rapidly, can be accomplished via deep learning, given sufficiently large models and sufficiently large datasets of labeled training examples. Other tasks, that cannot be described as associating one vector to another, or that are difficult enough that a person would require time to think and reflect in order to accomplish the task, remain beyond the scope of deep learning for now.

This part of the book describes the core parametric function approximation technology that is behind nearly all modern practical applications of deep learning. We begin by describing
the feedforward deep network model that is used to represent these functions. Next, we present advanced techniques for regularization and optimization of such models. Scaling these models to large inputs such as high resolution images or long temporal sequences requires specialization. We introduce the convolutional network for scaling to large images and the recurrent neural network for processing temporal sequences. Finally, we present general guidelines for the practical methodology involved in designing, building, and configuring an application involving deep learning, and review some of the applications of deep learning.

These chapters are the most important for a practitioner: someone who wants to begin implementing and using deep learning algorithms to solve real-world problems today.
Chapter 6

Deep Feedforward Networks

Deep feedforward networks, also often called feedforward neural networks, or multilayer perceptrons (MLPs), are the quintessential deep learning models. The goal of a feedforward network is to approximate some function f*. For example, for a classifier, y = f*(x) maps an input x to a category y. A feedforward network defines a mapping y = f(x; θ) and learns the value of the parameters θ that result in the best function approximation.

These models are called feedforward because information flows through the function being evaluated from x, through the intermediate computations used to define f, and finally to the output y. There are no feedback connections in which outputs of the model are fed back into itself. When feedforward neural networks are extended to include feedback connections, they are called recurrent neural networks, presented in chapter 10.

Feedforward networks are of extreme importance to machine learning practitioners. They form the basis of many important commercial applications. For example, the convolutional networks used for object recognition from photos are a specialized kind of feedforward network
. Feedforward networks are a conceptual stepping stone on the path to recurrent networks, which power many natural language applications.

Feedforward neural networks are called networks because they are typically represented by composing together many different functions. The model is associated with a directed acyclic graph describing how the functions are composed together. For example, we might have three functions f^(1), f^(2), and f^(3) connected in a chain, to form f(x) = f^(3)(f^(2)(f^(1)(x))). These chain structures are the most commonly used structures of neural networks. In this case, f^(1) is called the first layer of the network, f^(2) is called the second layer, and so on.
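The chain structure f(x) = f^(3)(f^(2)(f^(1)(x))) can be sketched directly in code. The three layer functions below are arbitrary placeholders of our own, not actual network layers; only the composition pattern matters.

```python
from functools import reduce

def f1(x):
    return 2 * x      # first layer (placeholder)

def f2(x):
    return x + 1      # second layer (placeholder)

def f3(x):
    return x ** 2     # output layer (placeholder)

def chain(*layers):
    """Compose layers so that chain(f1, f2, f3)(x) == f3(f2(f1(x)))."""
    def composed(x):
        return reduce(lambda acc, layer: layer(acc), layers, x)
    return composed

f = chain(f1, f2, f3)
print(f(3))   # f3(f2(f1(3))) = (2*3 + 1)**2 = 49
```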
The overall length of the chain gives the depth of the model. It is from this terminology that the name "deep learning" arises. The final layer of a feedforward network is called the output layer. During neural network training, we drive f(x) to match f*(x). The training data provides us with noisy, approximate examples of f*(x) evaluated at different training points. Each example x is accompanied by a label y ≈ f*(x). The training examples specify directly what the output layer must do at each point x; it must produce a value that is close to y. The behavior of the other layers is not directly specified by the training data. The learning algorithm must decide how to use those layers to produce the desired output, but the training data does not say what each individual layer should do. Instead, the learning algorithm must decide how to use these layers to best implement an approximation of f*. Because the training data does not show the desired output for each of these layers, these layers are called hidden layers.

Finally, these networks are called neural because they are loosely inspired by neuroscience. Each hidden layer of the network is typically vector-valued. The dimensionality of
these hidden layers determines the width of the model. Each element of the vector may be interpreted as playing a role analogous to a neuron. Rather than thinking of the layer as representing a single vector-to-vector function, we can also think of the layer as consisting of many units that act in parallel, each representing a vector-to-scalar function. Each unit resembles a neuron in the sense that it receives input from many other units and computes its own activation value. The idea of using many layers of vector-valued representation is drawn from neuroscience. The choice of the functions f^(i)(x) used to compute these representations is also loosely guided by neuroscientific observations about the functions that biological neurons compute. However, modern neural network research is guided by many mathematical and engineering disciplines, and the goal of neural networks is not to perfectly model the brain. It is best to think of feedforward networks as function approximation machines that are designed to achieve statistical generalization, occasionally drawing
some insights from what we know about the brain, rather than as models of brain function.

One way to understand feedforward networks is to begin with linear models and consider how to overcome their limitations. Linear models, such as logistic regression and linear regression, are appealing because they may be fit efficiently and reliably, either in closed form or with convex optimization. Linear models also have the obvious defect that the model capacity is limited to linear functions, so the model cannot understand the interaction between any two input variables.

To extend linear models to represent nonlinear functions of x, we can apply the linear model not to x itself but to a transformed input φ(x), where φ is a
nonlinear transformation. Equivalently, we can apply the kernel trick described in section 5.7.2, to obtain a nonlinear learning algorithm based on implicitly applying the φ mapping. We can think of φ as providing a set of features describing x, or as providing a new representation for x.

The question is then how to choose the mapping φ.

1. One option is to use a very generic φ, such as the infinite-dimensional φ that is implicitly used by kernel machines based on the RBF kernel. If φ(x) is of high enough dimension, we can always have enough capacity to fit the training set, but generalization to the test set often remains poor. Very generic feature mappings are usually based only on the principle of local smoothness and do not encode enough prior information to solve advanced problems.

2. Another option is to manually engineer φ. Until the advent of deep learning, this was the dominant approach. This approach requires decades of human effort for each separate task, with practitioners specializing in different domains such as speech recognition or computer vision, and with little transfer between domains.

3. The strategy of deep learning is to learn φ. In this approach, we have a model y
= f(x; θ, w) = φ(x; θ)^⊤w. We now have parameters θ that we use to learn φ from a broad class of functions, and parameters w that map from φ(x) to the desired output. This is an example of a deep feedforward network, with φ defining a hidden layer. This approach is the only one of the three that gives up on the convexity of the training problem, but the benefits outweigh the harms. In this approach, we parametrize the representation as φ(x; θ) and use the optimization algorithm to find the θ that corresponds to a good representation. If we wish, this approach can capture the benefit of the first approach by being highly generic: we do so by using a very broad family φ(x; θ). This approach can also capture the benefit of the second approach. Human practitioners can encode their knowledge to help generalization by
designing families φ(x; θ) that they expect will perform well. The advantage is that the human designer only needs to find the right general function family rather than finding precisely the right function.

This general principle of improving models by learning features extends beyond the feedforward networks described in this chapter. It is a recurring theme of deep learning that applies to all of the kinds of models described throughout this book. Feedforward networks are the application of this principle to learning deterministic
mappings from x to y that lack feedback connections. Other models presented later will apply these principles to learning stochastic mappings, learning functions with feedback, and learning probability distributions over a single vector.

We begin this chapter with a simple example of a feedforward network. Next, we address each of the design decisions needed to deploy a feedforward network. First, training a feedforward network requires making many of the same design decisions as are necessary for a linear model: choosing the optimizer, the cost function, and the form of the output units. We review these basics of gradient-based learning, then proceed to confront some of the design decisions that are unique to feedforward networks. Feedforward networks have introduced the concept of a hidden layer, and this requires us to choose the activation functions that will be used to compute the hidden layer values. We must also design the architecture of the network, including how many layers the network should contain, how these layers should be connected to each other, and how many units should be in each layer. Learning in deep neural networks requires computing the gradients of complicated functions. We present the back-propagation algorithm and its modern generalizations, which can be used to
efficiently compute these gradients. Finally, we close with some historical perspective.

6.1 Example: Learning XOR

To make the idea of a feedforward network more concrete, we begin with an example of a fully functioning feedforward network on a very simple task: learning the XOR function.

The XOR function ("exclusive or") is an operation on two binary values, x1 and x2. When exactly one of these binary values is equal to 1, the XOR function returns 1. Otherwise, it returns 0. The XOR function provides the target function y = f*(x) that we want to learn. Our model provides a function y = f(x; θ) and our learning algorithm will adapt the parameters θ to make f as similar as possible to f*.
In this simple example, we will not be concerned with statistical generalization. We want our network to perform correctly on the four points X = {[0, 0]^⊤, [0, 1]^⊤, [1, 0]^⊤, [1, 1]^⊤}. We will train the network on all four of these points. The only challenge is to fit the training set.

We can treat this problem as a regression problem and use a mean squared error loss function. We choose this loss function to simplify the math for this example as much as possible.
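Written out as a tiny sketch (the helper name xor_target is ours, not from the text), the four training points and their targets are:

```python
# The XOR target function f*(x1, x2): 1 when exactly one input is 1, else 0.
def xor_target(x1, x2):
    return 1 if x1 + x2 == 1 else 0

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [xor_target(x1, x2) for x1, x2 in X]
print(y)   # [0, 1, 1, 0]
```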
In practical applications, MSE is usually not an appropriate cost function for modeling binary data. More appropriate approaches are described in section 6.2.2.2.

Evaluated on our whole training set, the MSE loss function is

J(θ) = (1/4) Σ_{x∈X} (f*(x) − f(x; θ))².  (6.1)

Now we must choose the form of our model, f(x; θ). Suppose that we choose a linear model, with θ consisting of w and b. Our model is defined to be

f(x; w, b) = x^⊤w + b.  (6.2)

We can minimize J(θ) in closed form with respect to w and b using the normal equations.

After solving the normal equations, we obtain w = 0 and b = 1/2. The linear model simply outputs 0.5 everywhere. Why does this happen? Figure 6.1 shows how a linear model is not able to represent the XOR function.
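This failure can be checked numerically. A sketch using numpy's least-squares routine in place of an explicit normal-equations solve (for a full-rank problem like this one, the two give the same answer):

```python
import numpy as np

# Design matrix for the four XOR inputs, with a column of ones for the bias b.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

A = np.hstack([X, np.ones((4, 1))])            # columns: x1, x2, bias
theta, *_ = np.linalg.lstsq(A, y, rcond=None)  # minimizes the MSE of eq. 6.1
w, b = theta[:2], theta[2]

print(w)           # approximately [0, 0]
print(b)           # approximately 0.5
print(A @ theta)   # the model outputs 0.5 for every example
```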
One way to solve this problem is to use a model that learns a different feature space in which a linear model is able to represent the solution.

Specifically, we will introduce a very simple feedforward network with one hidden layer containing two hidden units. See figure 6.2 for an illustration of this model. This feedforward network has a vector of hidden units h that are computed by a function f^(1)(x; W, c). The values of these hidden units are then used as the input for a second layer. The second layer is the output layer of the network. The output layer is still just a linear regression model, but now it is applied to h rather than to x. The network now contains two functions chained together: h = f^(1)(x; W, c) and y = f^(2)(h; w, b), with the complete model being f(x; W, c, w, b) = f^(2)(f^(1)(x)).
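A caveat before choosing f^(1): if both layers were linear, the whole network would stay linear. A quick numerical check of that fact, using arbitrary illustrative matrices of our own (not parameters from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))   # illustrative first-layer weights (arbitrary)
w = rng.standard_normal(2)        # illustrative second-layer weights (arbitrary)
x = rng.standard_normal(3)

h = W.T @ x                 # a purely linear first layer: f1(x) = W^T x
y1 = h @ w                  # a purely linear second layer: f2(h) = h^T w
w_prime = W @ w             # collapsed weights w' = W w
y2 = x @ w_prime            # a single linear layer gives the same output

print(np.isclose(y1, y2))   # True: the two-layer linear network is still linear
```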
What function should f^(1) compute? Linear models have served us well so far, and it may be tempting to make f^(1) be linear as well. Unfortunately, if f^(1) were linear, then the feedforward network as a whole would remain a linear function of its input. Ignoring the intercept terms for the moment, suppose f^(1)(x) = W^⊤x and f^(2)(h) = h^⊤w. Then f(x) = w^⊤W^⊤x. We could represent this function as f(x) = x^⊤w' where w' = Ww.

Clearly, we must use a nonlinear function to describe the features. Most neural networks do so using an affine transformation controlled by learned parameters, followed by a fixed, nonlinear function called an activation function. We use that strategy here, by defining h = g(W^⊤x + c), where W provides the weights of a linear transformation and c the biases. Previously, to describe a linear regression
Figure 6.1: Solving the XOR problem by learning a representation. (Left panel: the original space x, with axes x1 and x2; right panel: the learned space h, with axes h1 and h2.) The bold numbers printed on the plot indicate the value that the learned function must output at each point. (Left) A linear model applied directly to the original input cannot implement the XOR function. When x1 = 0, the model's output must increase as x2 increases. When x1 = 1, the model's output must decrease as x2 increases. A linear model must apply a fixed coefficient w2 to x2. The linear model therefore cannot use the value of x1 to change the coefficient on x2 and cannot solve this problem. (Right) In the transformed space represented by the features extracted by a neural network, a linear model can now solve the problem. In our example solution, the two points that must have output 1 have been collapsed into a single point in feature space, h = [1, 0]^⊤. The linear model can now
describe the function as increasing in h1 and decreasing in h2. In this example, the motivation for learning the feature space is only to make the model capacity greater so that it can fit the training set. In more realistic applications, learned representations can also help the model to generalize.
Figure 6.2: An example of a feedforward network, drawn in two different styles. Specifically, this is the feedforward network we use to solve the XOR example. It has a single hidden layer containing two units. (Left) In this style, we draw every unit as a node in the graph. This style is very explicit and unambiguous, but for networks larger than this example it can consume too much space. (Right) In this style, we draw a node in the graph for each entire vector representing a layer's activations. This style is much more compact. Sometimes we annotate the edges in this graph with the name of the parameters that describe the relationship between two layers. Here, we indicate that a matrix W describes the mapping from x to h, and a vector w describes the mapping from h to y. We typically omit the intercept parameters associated with each layer when labeling this kind of drawing.

model, we used a vector of weights and a scalar bias parameter to describe an affine transformation from an input vector to an output scalar.
Now, we describe an affine transformation from a vector x to a vector h, so an entire vector of bias parameters is needed. The activation function g is typically chosen to be a function that is applied element-wise, with h_i = g(x^⊤W_{:,i} + c_i). In modern neural networks, the default recommendation is to use the rectified linear unit or ReLU (Jarrett et al., 2009; Nair and Hinton, 2010; Glorot et al., 2011a), defined by the activation function g(z) = max{0, z} depicted in figure 6.3.

We can now specify our complete network as

f(x; W, c, w, b) = w^⊤ max{0, W^⊤x + c} + b.  (6.3)

We can now specify a solution to the XOR problem. Let

W = [[1, 1],
     [1, 1]],  (6.4)

c = [0, −1]^⊤,  (6.5)
Figure 6.3: The rectified linear activation function g(z) = max{0, z}. This activation function is the default activation function recommended for use with most feedforward neural networks. Applying this function to the output of a linear transformation yields a nonlinear transformation. However, the function remains very close to linear, in the sense that it is a piecewise linear function with two linear pieces. Because rectified linear units are nearly linear, they preserve many of the properties that make linear models easy to optimize with gradient-based methods. They also preserve many of the properties that make linear models generalize well. A common principle throughout computer science is that we can build complicated systems from minimal components. Much as a Turing machine's memory needs only to be able to store 0 or 1 states, we can build a universal function approximator from rectified linear functions.
w = [1, −2]^⊤,  (6.6)

and b = 0.

We can now walk through the way that the model processes a batch of inputs. Let X be the design matrix containing all four points in the binary input space, with one example per row:

X = [[0, 0],
     [0, 1],
     [1, 0],
     [1, 1]].  (6.7)

The first step in the neural network is to multiply the input matrix by the first layer's weight matrix:

XW = [[0, 0],
      [1, 1],
      [1, 1],
      [2, 2]].  (6.8)

Next, we add the bias vector c, to obtain

[[0, −1],
 [1, 0],
 [1, 0],
 [2, 1]].  (6.9)

In this space, all of the examples lie along a line with slope 1. As we move along this line, the output needs to begin at 0, then rise to 1, then drop back down to 0. A linear model cannot implement such a function. To finish computing the value of h for each example, we apply the rectified linear transformation:

[[0, 0],
 [1, 0],
 [1, 0],
 [2, 1]].  (6.10)

This transformation has changed the relationship between the examples. They no longer lie on a single line. As shown in
figure 6.1, they now lie in a space where a linear model can solve the problem.

We finish by multiplying by the weight vector w:

[0, 1, 1, 0]^⊤.  (6.11)
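The whole batch computation, equations 6.7 through 6.11, fits in a few lines of numpy (a sketch mirroring the hand-specified parameters above):

```python
import numpy as np

# Parameters from the hand-specified XOR solution (eqs. 6.4-6.6).
W = np.array([[1., 1.],
              [1., 1.]])
c = np.array([0., -1.])
w = np.array([1., -2.])
b = 0.

# Design matrix of all four binary inputs (eq. 6.7), one example per row.
X = np.array([[0., 0.],
              [0., 1.],
              [1., 0.],
              [1., 1.]])

XW = X @ W                   # eq. 6.8: multiply by the first layer's weights
H = np.maximum(0, XW + c)    # eqs. 6.9-6.10: add the bias vector, apply ReLU
y_hat = H @ w + b            # eq. 6.11: multiply by the output weight vector

print(y_hat)   # [0. 1. 1. 0.], the XOR targets
```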
The neural network has obtained the correct answer for every example in the batch.

In this example, we simply specified the solution, then showed that it obtained zero error. In a real situation, there might be billions of model parameters and billions of training examples, so one cannot simply guess the solution as we did here. Instead, a gradient-based optimization algorithm can find parameters that produce very little error. The solution we described to the XOR problem is at a global minimum of the loss function, so gradient descent could converge to this point. There are other equivalent solutions to the XOR problem that gradient descent could also find. The convergence point of gradient descent depends on the initial values of the parameters. In practice, gradient descent would usually not find clean, easily understood, integer-valued solutions like the one we presented here.

6.2 Gradient-Based Learning

Designing and training a neural network is not much different from training any other machine learning model with gradient descent. In section 5.10, we described how to build a machine learning algorithm by specifying an optimization procedure, a cost function, and a model family.
nonlinearity of a neural network causes most interesting loss functions to become non-convex. This means that neural networks are usually trained by using iterative, gradient-based optimizers that merely drive the cost function to a very low value, rather than the linear equation solvers used to train linear regression models or the convex optimization algorithms with global convergence guarantees used to train logistic regression or SVMs. Convex optimization converges starting from any initial parameters (in theory; in practice it is very robust but can encounter numerical problems). Stochastic gradient descent applied to non-convex loss functions has no such convergence guarantee, and is sensitive to the values of the initial parameters. For feedforward neural networks, it is important to initialize all weights to small random values. The biases may be initialized to zero or to small positive values. The iterative gradient-based optimization algorithms used to train feedforward networks and almost all other deep models will be described in detail in chapter
8, with parameter initialization in particular discussed in section 8.4. For the moment, it suffices to understand that the training algorithm is almost always based on using the gradient to descend the cost function in one way or another. The specific algorithms are improvements and refinements on the ideas of gradient descent, introduced in section 4.3, and,
more specifically, are most often improvements of the stochastic gradient descent algorithm, introduced in section 5.9.

We can, of course, train models such as linear regression and support vector machines with gradient descent too, and in fact this is common when the training set is extremely large. From this point of view, training a neural network is not much different from training any other model. Computing the gradient is slightly more complicated for a neural network, but can still be done efficiently and exactly. Section 6.5 will describe how to obtain the gradient using the back-propagation algorithm and modern generalizations of the back-propagation algorithm.

As with other machine learning models, to apply gradient-based learning we must choose a cost function, and we must choose how to represent the output of the model. We now revisit these design considerations with special emphasis on the neural networks scenario.

6.2.1 Cost Functions

An important aspect of the design of a deep neural network is the choice of the cost function. Fortunately, the cost functions for neural networks are more or less the same as those for other parametric models, such as linear models.

In most cases, our parametric model defines a distribution p(
y | x; θ) and we simply use the principle of maximum likelihood. This means we use the cross-entropy between the training data and the model's predictions as the cost function.

Sometimes, we take a simpler approach, where rather than predicting a complete probability distribution over y, we merely predict some statistic of y conditioned on x. Specialized loss functions allow us to train a predictor of these estimates.

The total cost function used to train a neural network will often combine one of the primary cost functions described here with a regularization term. We have already seen some simple examples of regularization applied to linear models in section 5.2.2. The weight decay approach used for linear models is also directly applicable to deep neural networks and is among the most popular regularization strategies. More advanced regularization strategies for neural networks will be described in chapter 7.

6.2.1.1 Learning Conditional Distributions with Maximum Likelihood

Most modern neural networks are trained using maximum likelihood. This means that the cost function is simply the
negative log-likelihood, equivalently described
as the cross-entropy between the training data and the model distribution. This cost function is given by

    J(θ) = −E_{x,y∼p̂_data} log p_model(y | x).    (6.12)

The specific form of the cost function changes from model to model, depending on the specific form of log p_model. The expansion of the above equation typically yields some terms that do not depend on the model parameters and may be discarded. For example, as we saw in section 5.5.1, if p_model(y | x) = N(y; f(x; θ), I), then we recover the mean squared error cost,

    J(θ) = (1/2) E_{x,y∼p̂_data} ||y − f(x; θ)||² + const,    (6.13)

up to a scaling factor of 1/2 and a term that does not depend on θ. The discarded constant is based on the variance of the Gaussian distribution, which in this case we chose not to parametrize. Previously, we saw that the equivalence between maximum likelihood estimation with an output distribution and minimization of mean
squared error holds for a linear model, but in fact, the equivalence holds regardless of the f(x; θ) used to predict the mean of the Gaussian.

An advantage of this approach of deriving the cost function from maximum likelihood is that it removes the burden of designing cost functions for each model. Specifying a model p(y | x) automatically determines a cost function −log p(y | x).

One recurring theme throughout neural network design is that the gradient of the cost function must be large and predictable enough to serve as a good guide for the learning algorithm. Functions that saturate (become very flat) undermine this objective because they make the gradient become very small. In many cases this happens because the activation functions used to produce the output of the hidden units or the output units saturate. The negative log-likelihood helps to avoid this problem for many models. Many output units involve an exp function that can saturate when its argument is very negative. The log function in the
negative log-likelihood cost function undoes the exp of some output units. We will discuss the interaction between the cost function and the choice of output unit in section 6.2.2.

One unusual property of the cross-entropy cost used to perform maximum likelihood estimation is that it usually does not have a minimum value when applied to the models commonly used in practice. For discrete output variables, most models are parametrized in such a way that they cannot represent a probability of zero or one, but can come arbitrarily close to doing so. Logistic regression is an example of such a model. For real-valued output variables, if the model
can control the density of the output distribution (for example, by learning the variance parameter of a Gaussian output distribution) then it becomes possible to assign extremely high density to the correct training set outputs, resulting in cross-entropy approaching negative infinity. Regularization techniques described in chapter 7 provide several different ways of modifying the learning problem so that the model cannot reap unlimited reward in this way.

6.2.1.2 Learning Conditional Statistics

Instead of learning a full probability distribution p(y | x; θ) we often want to learn just one conditional statistic of y given x.

For example, we may have a predictor f(x; θ) that we wish to use to predict the mean of y.

If we use a sufficiently powerful neural network, we can think of the neural network as being able to represent any function f from a wide class of functions, with this class being limited only by features such as continuity and boundedness rather than by having a specific parametric form. From this point of view, we can view the cost function as being a functional rather than just a function. A functional is a mapping from functions to real numbers. We can thus think of learning as choosing a function rather
than merely choosing a set of parameters. We can design our cost functional to have its minimum occur at some specific function we desire. For example, we can design the cost functional to have its minimum lie on the function that maps x to the expected value of y given x. Solving an optimization problem with respect to a function requires a mathematical tool called calculus of variations, described in section 19.4.2. It is not necessary to understand calculus of variations to understand the content of this chapter. At the moment, it is only necessary to understand that calculus of variations may be used to derive the following two results.

Our first result derived using calculus of variations is that solving the optimization problem

    f* = argmin_f E_{x,y∼p_data} ||y − f(x)||²    (6.14)

yields

    f*(x) = E_{y∼p_data(y|x)}[y],    (6.15)

so long as this function lies within the class we optimize over.
In other words, if we could train on infinitely many samples from the true data generating distribution, minimizing the mean squared error cost function gives a function that predicts the mean of y for each value of x.
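The first result can be illustrated empirically on a finite sample: the constant prediction that minimizes average squared error is the sample mean. A minimal sketch in plain Python (the data values and the candidate grid are made up for illustration):

```python
# For a fixed x, the best constant prediction under squared error
# is the mean of the observed y values.
ys = [1.0, 2.0, 2.0, 7.0]

def mse(c):
    return sum((y - c) ** 2 for y in ys) / len(ys)

mean = sum(ys) / len(ys)  # 3.0

# The mean achieves a lower cost than every nearby candidate.
candidates = [mean + d for d in (-1.0, -0.5, -0.1, 0.0, 0.1, 0.5, 1.0)]
best = min(candidates, key=mse)
print(best)  # 3.0, the sample mean
```

The same experiment with an absolute-error cost would instead select the sample median, anticipating the second result below.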
Different cost functions give different statistics. A second result derived using calculus of variations is that

    f* = argmin_f E_{x,y∼p_data} ||y − f(x)||₁    (6.16)

yields a function that predicts the median value of y for each x, so long as such a function may be described by the family of functions we optimize over. This cost function is commonly called mean absolute error.

Unfortunately, mean squared error and mean absolute error often lead to poor results when used with gradient-based optimization. Some output units that saturate produce very small gradients when combined with these cost functions. This is one reason that the cross-entropy cost function is more popular than mean squared error or mean absolute error, even when it is not necessary to estimate an entire distribution p(y | x).

6.2.2 Output Units

The choice of cost function is tightly coupled with the choice of output unit. Most of the time, we simply use the cross-entropy between the data distribution and the model distribution. The choice of how to represent the output then determines the form of the cross-entropy function.

Any kind of neural network unit that may be used as an output can also
be used as a hidden unit. Here, we focus on the use of these units as outputs of the model, but in principle they can be used internally as well. We revisit these units with additional detail about their use as hidden units in section 6.3.

Throughout this section, we suppose that the feedforward network provides a set of hidden features defined by h = f(x; θ). The role of the output layer is then to provide some additional transformation from the features to complete the task that the network must perform.

6.2.2.1 Linear Units for Gaussian Output Distributions

One simple kind of output unit is an output unit based on an affine transformation with no nonlinearity. These are often just called linear units.

Given features h, a layer of linear output units produces a vector ŷ = W^T h + b.

Linear output layers are often used to produce the mean of a conditional Gaussian distribution:
    p(y | x) = N(y; ŷ, I).    (6.17)
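Because the density in equation 6.17 has unit variance, its negative log-likelihood equals half the squared error plus a constant, which is why maximizing this likelihood matches minimizing mean squared error. A small numeric check in plain Python (scalar case; the example values are made up):

```python
import math

def neg_log_gaussian(y, mean):
    # -log N(y; mean, 1), computed directly from the density formula
    density = math.exp(-0.5 * (y - mean) ** 2) / math.sqrt(2.0 * math.pi)
    return -math.log(density)

const = 0.5 * math.log(2.0 * math.pi)
for y, yhat in [(1.0, 0.25), (-2.0, 0.5), (3.0, 3.0)]:
    half_sq_err = 0.5 * (y - yhat) ** 2
    # The NLL and half the squared error differ only by a constant,
    # so they have the same minimizer with respect to the prediction.
    assert abs(neg_log_gaussian(y, yhat) - (half_sq_err + const)) < 1e-9
```

The constant term here is the 0.5 log 2π contribution of the unparametrized variance, which is exactly the discarded term mentioned in the text.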
Maximizing the log-likelihood is then equivalent to minimizing the mean squared error.

The maximum likelihood framework makes it straightforward to learn the covariance of the Gaussian too, or to make the covariance of the Gaussian be a function of the input. However, the covariance must be constrained to be a positive definite matrix for all inputs. It is difficult to satisfy such constraints with a linear output layer, so typically other output units are used to parametrize the covariance. Approaches to modeling the covariance are described shortly, in section 6.2.2.4.

Because linear units do not saturate, they pose little difficulty for gradient-based optimization algorithms and may be used with a wide variety of optimization algorithms.

6.2.2.2 Sigmoid Units for Bernoulli Output Distributions

Many tasks require predicting the value of a binary variable y. Classification problems with two classes can be cast in this form.

The maximum-likelihood approach is to define a Bernoulli distribution over y conditioned on x.

A Bernoulli distribution is defined by just a single number. The neural net needs to predict only P
(y = 1 | x). For this number to be a valid probability, it must lie in the interval [0, 1].

Satisfying this constraint requires some careful design effort. Suppose we were to use a linear unit, and threshold its value to obtain a valid probability:

    P(y = 1 | x) = max{0, min{1, w^T h + b}}.    (6.18)

This would indeed define a valid conditional distribution, but we would not be able to train it very effectively with gradient descent. Any time that w^T h + b strayed outside the unit interval, the gradient of the output of the model with respect to its parameters would be 0. A gradient of 0 is typically problematic because the learning algorithm no longer has a guide for how to improve the corresponding parameters.

Instead, it is better to use a different approach that ensures there is always a strong gradient whenever the model has the wrong answer. This approach is based on using sigmoid output units combined with maximum likelihood.
A sigmoid output unit is defined by

    ŷ = σ(w^T h + b)    (6.19)
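The contrast between equations 6.18 and 6.19 can be made concrete by comparing gradients. The sketch below (plain Python, finite differences, illustrative values only) shows that the thresholded linear unit of equation 6.18 has zero gradient once its argument leaves [0, 1], while the negative log-likelihood of the sigmoid unit keeps a strong gradient when the model is confidently wrong:

```python
import math

def clipped(z):
    # equation 6.18: threshold a linear score into [0, 1]
    return max(0.0, min(1.0, z))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_nll(z, y):
    # -log P(y) for a Bernoulli parametrized by sigmoid(z)
    p = sigmoid(z)
    return -math.log(p if y == 1 else 1.0 - p)

def grad(f, z, eps=1e-6):
    # central finite-difference approximation of df/dz
    return (f(z + eps) - f(z - eps)) / (2 * eps)

z = -4.0   # the model confidently predicts y = 0
y = 1      # but the true label is 1

g_clipped = grad(clipped, z)                  # exactly 0: no learning signal
g_nll = grad(lambda v: sigmoid_nll(v, y), z)  # near -1: strong learning signal
print(g_clipped, g_nll)
```

The NLL gradient with respect to z is sigmoid(z) − y, so it only approaches zero as the model's answer approaches the correct label.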
where σ is the logistic sigmoid function described in section 3.10.

We can think of the sigmoid output unit as having two components. First, it uses a linear layer to compute z = w^T h + b. Next, it uses the sigmoid activation function to convert z into a probability.

We omit the dependence on x for the moment to discuss how to define a probability distribution over y using the value z. The sigmoid can be motivated by constructing an unnormalized probability distribution P̃(y), which does not sum to 1. We can then divide by an appropriate constant to obtain a valid probability distribution. If we begin with the assumption that the unnormalized log probabilities are linear in y and z, we can exponentiate to obtain the unnormalized probabilities. We then normalize to see that this yields a Bernoulli distribution controlled by a sigmoidal transformation of z:

    log P̃(y) = yz    (6.20)
    P̃(y) = exp(yz)    (6.21)
    P(y) = exp(yz) / Σ_{y′=0}^{1} exp(y′z)    (6.22)
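The normalization step in this derivation can be verified numerically: dividing exp(yz) by exp(0·z) + exp(1·z) yields the logistic sigmoid of z for y = 1, and of −z for y = 0. A quick check in plain Python:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bernoulli_from_z(y, z):
    # Normalize the unnormalized probabilities exp(y*z) over y in {0, 1}.
    return math.exp(y * z) / (math.exp(0.0 * z) + math.exp(1.0 * z))

for z in [-3.0, -0.5, 0.0, 1.2, 4.0]:
    assert abs(bernoulli_from_z(1, z) - sigmoid(z)) < 1e-12
    assert abs(bernoulli_from_z(0, z) - sigmoid(-z)) < 1e-12
    # The probabilities of the two outcomes sum to one.
    assert abs(bernoulli_from_z(0, z) + bernoulli_from_z(1, z) - 1.0) < 1e-12
```

This confirms that normalizing the exponentiated log-probabilities yields a valid Bernoulli distribution whose success probability is a sigmoidal transformation of z.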