distribution can be conceptualized as a multinoulli distribution, with a probability associated with each possible input value that is simply equal to the empirical frequency of that value in the training set. We can view the empirical distribution formed from a dataset of training examples as specifying the distribution that we sample from when we train a model on this dataset. Another important perspective on the empirical distribution is that it is the probability density that maximizes the likelihood of the training data (see section 5.5).

3.9.6 Mixtures of Distributions

It is also common to define probability distributions by combining other simpler probability distributions. One common way of combining distributions is to construct a mixture distribution. A mixture distribution is made up of several component distributions. On each trial, the choice of which component distribution generates the sample is determined by sampling a component identity from a multinoulli distribution:

P(x) = \sum_i P(c = i) P(x \mid c = i)    (3.29)

where P(c) is the multinoulli distribution over component identities.

We have already seen one example of a mixture distribution: the empirical distribution over real-valued variables is a mixture distribution with one Dirac component for each training example.
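A minimal Python sketch of the sampling process in equation 3.29; the two uniform components and the 0.3/0.7 weights are made up for illustration:

```python
import random

def sample_mixture(weights, samplers):
    """Draw one sample from a mixture: pick a component identity c from the
    multinoulli P(c) given by `weights`, then sample from P(x | c)."""
    c = random.choices(range(len(weights)), weights=weights, k=1)[0]
    return samplers[c]()

# Hypothetical two-component mixture: U(0, 1) and U(10, 11).
random.seed(0)
draws = [sample_mixture([0.3, 0.7],
                        [lambda: random.random(),
                         lambda: 10 + random.random()])
         for _ in range(10_000)]
frac_second = sum(x >= 10 for x in draws) / len(draws)
print(round(frac_second, 1))  # close to the second component's weight, 0.7
```

The empirical fraction of samples from each component converges to that component's multinoulli probability.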
Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville
Chapter 3. Probability and Information Theory

The mixture model is one simple strategy for combining probability distributions to create a richer distribution. In chapter 16, we explore the art of building complex probability distributions from simple ones in more detail.

The mixture model allows us to briefly glimpse a concept that will be of paramount importance later: the latent variable. A latent variable is a random variable that we cannot observe directly. The component identity variable c of the mixture model provides an example. Latent variables may be related to x through the joint distribution, in this case P(x, c) = P(x \mid c) P(c). The distribution P(c) over the latent variable and the distribution P(x \mid c) relating the latent variables to the visible variables determine the shape of the distribution P(x), even though it is possible to describe P(x) without reference to the latent variable. Latent variables are discussed further in section 16.5.

A very powerful and common type of mixture model is the Gaussian mixture model, in which the components p(x \mid c = i) are Gaussians. Each component has a separately parametrized mean \mu^{(i)} and covariance \Sigma^{(i)}.
Some mixtures can have more constraints. For example, the covariances could be shared across components via the constraint \Sigma^{(i)} = \Sigma, \forall i. As with a single Gaussian distribution, the mixture of Gaussians might constrain the covariance matrix for each component to be diagonal or isotropic.

In addition to the means and covariances, the parameters of a Gaussian mixture specify the prior probability \alpha_i = P(c = i) given to each component i. The word "prior" indicates that it expresses the model's beliefs about c before it has observed x. By comparison, P(c \mid x) is a posterior probability, because it is computed after observation of x. A Gaussian mixture model is a universal approximator of densities, in the sense that any smooth density can be approximated with any specific, nonzero amount of error by a Gaussian mixture model with enough components.
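As a sketch of sampling from a Gaussian mixture, here is the scalar case with made-up means, standard deviations, and equal priors (full covariance matrices would require a multivariate normal sampler):

```python
import random

def sample_gmm(priors, means, stds):
    """Scalar Gaussian mixture: pick component i with prior alpha_i = P(c = i),
    then draw from the Gaussian N(mu_i, sigma_i^2)."""
    i = random.choices(range(len(priors)), weights=priors, k=1)[0]
    return random.gauss(means[i], stds[i])

random.seed(1)
xs = [sample_gmm([0.5, 0.5], [-5.0, 5.0], [1.0, 1.0]) for _ in range(20_000)]
mean = sum(xs) / len(xs)
print(abs(mean) < 0.2)  # True -- the two symmetric components balance out
```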
Figure 3.2 shows samples from a Gaussian mixture model.

3.10 Useful Properties of Common Functions

Certain functions arise often while working with probability distributions, especially the probability distributions used in deep learning models. One of these functions is the logistic sigmoid:

\sigma(x) = \frac{1}{1 + \exp(-x)}.    (3.30)

The logistic sigmoid is commonly used to produce the \phi parameter of a Bernoulli
Figure 3.2: Samples from a Gaussian mixture model. In this example, there are three components. From left to right, the first component has an isotropic covariance matrix, meaning it has the same amount of variance in each direction. The second has a diagonal covariance matrix, meaning it can control the variance separately along each axis-aligned direction. This example has more variance along the x_2 axis than along the x_1 axis. The third component has a full-rank covariance matrix, allowing it to control the variance separately along an arbitrary basis of directions.

distribution because its range is (0, 1), which lies within the valid range of values for the \phi parameter. See figure 3.3 for a graph of the sigmoid function. The sigmoid function saturates when its argument is very positive or very negative, meaning that the function becomes very flat and insensitive to small changes in its input.

Another commonly encountered function is the softplus function (Dugas et al., 2001):

\zeta(x) = \log(1 + \exp(x)).    (3.31)
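Both functions can be implemented directly from equations 3.30 and 3.31; this sketch uses standard numerically stable forms, and the specific test values are arbitrary:

```python
import math

def sigmoid(x):
    """Logistic sigmoid sigma(x) = 1 / (1 + exp(-x)) (eq. 3.30), written in
    two branches so exp never overflows for large |x|."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def softplus(x):
    """Softplus zeta(x) = log(1 + exp(x)) (eq. 3.31), a smoothed max(0, x)."""
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

print(sigmoid(0.0))               # 0.5
print(round(softplus(50.0), 6))   # 50.0 -- softplus approaches x for large x
print(round(softplus(-50.0), 6))  # 0.0 -- and approaches 0 for very negative x
```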
The softplus function can be useful for producing the \beta or \sigma parameter of a normal distribution because its range is (0, \infty). It also arises commonly when manipulating expressions involving sigmoids. The name of the softplus function comes from the fact that it is a smoothed or "softened" version of

x^+ = \max(0, x).    (3.32)

See figure 3.4 for a graph of the softplus function.

The following properties are all useful enough that you may wish to memorize them:
Figure 3.3: The logistic sigmoid function.

Figure 3.4: The softplus function.
\sigma(x) = \frac{\exp(x)}{\exp(x) + \exp(0)}    (3.33)

\frac{d}{dx}\sigma(x) = \sigma(x)(1 - \sigma(x))    (3.34)

1 - \sigma(x) = \sigma(-x)    (3.35)

\log \sigma(x) = -\zeta(-x)    (3.36)

\frac{d}{dx}\zeta(x) = \sigma(x)    (3.37)

\forall x \in (0, 1),\ \sigma^{-1}(x) = \log\left(\frac{x}{1 - x}\right)    (3.38)

\forall x > 0,\ \zeta^{-1}(x) = \log(\exp(x) - 1)    (3.39)

\zeta(x) = \int_{-\infty}^{x} \sigma(y)\, dy    (3.40)

\zeta(x) - \zeta(-x) = x    (3.41)

The function \sigma^{-1}(x) is called the logit in statistics, but this term is more rarely used in machine learning.

Equation 3.41 provides extra justification for the name "softplus." The softplus function is intended as a smoothed version of the positive part function.
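Several of these identities can be checked numerically; the test point x = 1.3 below is arbitrary:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softplus(x):
    return math.log1p(math.exp(x))

x = 1.3  # arbitrary test point
checks = {
    "3.35": (1 - sigmoid(x)) - sigmoid(-x),
    "3.36": math.log(sigmoid(x)) - (-softplus(-x)),
    "3.38": math.log(sigmoid(x) / (1 - sigmoid(x))) - x,  # logit inverts sigma
    "3.39": math.log(math.expm1(softplus(x))) - x,        # zeta^{-1} inverts zeta
    "3.41": (softplus(x) - softplus(-x)) - x,
}
print(all(abs(err) < 1e-9 for err in checks.values()))  # True
```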
The positive part function, x^+ = \max\{0, x\}, is the counterpart of the negative part function, x^- = \max\{0, -x\}. To obtain a smooth function that is analogous to the negative part, one can use \zeta(-x). Just as x can be recovered from its positive part and negative part via the identity x^+ - x^- = x, it is also possible to recover x using the same relationship between \zeta(x) and \zeta(-x), as shown in equation 3.41.

3.11 Bayes' Rule

We often find ourselves in a situation where we know P(y \mid x) and need to know P(x \mid y). Fortunately, if we also know P(x), we can compute the desired quantity using Bayes' rule:

P(x \mid y) = \frac{P(x) P(y \mid x)}{P(y)}.    (3.42)
Note that while P(y) appears in the formula, it is usually feasible to compute P(y) = \sum_x P(y \mid x) P(x), so we do not need to begin with knowledge of P(y).
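A small sketch of equation 3.42 for a discrete x; the disease-testing numbers are invented for illustration:

```python
def posterior(prior_x, likelihood, y):
    """Bayes' rule (eq. 3.42) for discrete x: P(x | y) = P(x) P(y | x) / P(y),
    with P(y) = sum_x P(y | x) P(x) computed from the given quantities."""
    p_y = sum(prior_x[x] * likelihood[x][y] for x in prior_x)
    return {x: prior_x[x] * likelihood[x][y] / p_y for x in prior_x}

# Invented screening-test numbers: 1% prevalence, 90% sensitivity, 9% false positives.
prior = {"ill": 0.01, "healthy": 0.99}
lik = {"ill": {"pos": 0.90, "neg": 0.10},
       "healthy": {"pos": 0.09, "neg": 0.91}}
post = posterior(prior, lik, "pos")
print(round(post["ill"], 3))  # 0.092 -- a positive result is far from conclusive
```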
Bayes' rule is straightforward to derive from the definition of conditional probability, but it is useful to know the name of this formula, since many texts refer to it by name. It is named after the Reverend Thomas Bayes, who first discovered a special case of the formula. The general version presented here was independently discovered by Pierre-Simon Laplace.

3.12 Technical Details of Continuous Variables

A proper formal understanding of continuous random variables and probability density functions requires developing probability theory in terms of a branch of mathematics known as measure theory. Measure theory is beyond the scope of this textbook, but we can briefly sketch some of the issues that measure theory is employed to resolve.

In section 3.3.2, we saw that the probability of a continuous vector-valued x lying in some set \mathbb{S} is given by the integral of p(x) over the set \mathbb{S}. Some choices of set \mathbb{S} can produce paradoxes. For example, it is possible to construct two sets \mathbb{S}_1 and \mathbb{S}_2 such that p(x \in \mathbb{S}_1) + p(x \in \mathbb{S}_2) > 1 but \mathbb{S}_1 \cap \mathbb{S}_2 = \emptyset. These sets are generally constructed making very heavy use of the infinite precision
of real numbers, for example by making fractal-shaped sets or sets that are defined by transforming the set of rational numbers.² One of the key contributions of measure theory is to provide a characterization of the set of sets that we can compute the probability of without encountering paradoxes. In this book, we only integrate over sets with relatively simple descriptions, so this aspect of measure theory never becomes a relevant concern.

For our purposes, measure theory is more useful for describing theorems that apply to most points in \mathbb{R}^n but do not apply to some corner cases. Measure theory provides a rigorous way of describing that a set of points is negligibly small. Such a set is said to have measure zero. We do not formally define this concept in this textbook. For our purposes, it is sufficient to understand the intuition that a set of measure zero occupies no volume in the space we are measuring. For example, within \mathbb{R}^2, a line has measure zero, while a filled polygon has
positive measure. Likewise, an individual point has measure zero. Any union of countably many sets that each have measure zero also has measure zero (so the set of all the rational numbers has measure zero, for instance).

Another useful term from measure theory is almost everywhere. A property that holds almost everywhere holds throughout all of space except for on a set of

²The Banach-Tarski theorem provides a fun example of such sets.
measure zero. Because the exceptions occupy a negligible amount of space, they can be safely ignored for many applications. Some important results in probability theory hold for all discrete values but only hold "almost everywhere" for continuous values.

Another technical detail of continuous variables relates to handling continuous random variables that are deterministic functions of one another. Suppose we have two random variables, x and y, such that y = g(x), where g is an invertible, continuous, differentiable transformation. One might expect that p_y(y) = p_x(g^{-1}(y)). This is actually not the case.

As a simple example, suppose we have scalar random variables x and y. Suppose y = \frac{x}{2} and x \sim U(0, 1). If we use the rule p_y(y) = p_x(2y), then p_y will be 0 everywhere except the interval [0, \frac{1}{2}], and it will be 1 on this interval. This means

\int p_y(y)\, dy = \frac{1}{2},    (3.43)

which violates the definition of a probability distribution. This is a common mistake. The problem with this approach is that it
fails to account for the distortion of space introduced by the function g. Recall that the probability of x lying in an infinitesimally small region with volume \delta x is given by p(x)\, \delta x. Since g can expand or contract space, the infinitesimal volume surrounding x in x space may have a different volume in y space.

To see how to correct the problem, we return to the scalar case. We need to preserve the property

|p_y(g(x))\, dy| = |p_x(x)\, dx|.    (3.44)

Solving from this, we obtain

p_y(y) = p_x(g^{-1}(y)) \left| \frac{\partial x}{\partial y} \right|    (3.45)

or equivalently

p_x(x) = p_y(g(x)) \left| \frac{\partial g(x)}{\partial x} \right|.    (3.46)

In higher dimensions, the derivative generalizes to the determinant of the Jacobian matrix, the matrix with
J_{i,j} = \frac{\partial x_i}{\partial y_j}. Thus, for real-valued vectors x and y,

p_x(x) = p_y(g(x)) \left| \det\left( \frac{\partial g(x)}{\partial x} \right) \right|.    (3.47)
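Returning to the scalar example with y = x/2 and x ~ U(0, 1), a quick numerical check contrasts the naive rule with equation 3.45:

```python
# x ~ U(0, 1) and y = g(x) = x / 2, so g^{-1}(y) = 2y and |dx/dy| = 2.
def p_x(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def p_y_naive(y):    # p_y(y) = p_x(g^{-1}(y)): integrates to only 1/2
    return p_x(2 * y)

def p_y_correct(y):  # eq. 3.45 adds the Jacobian factor |dx/dy| = 2
    return p_x(2 * y) * 2.0

# Midpoint Riemann sums of each candidate density over the support [0, 1/2].
n, h = 100_000, 0.5 / 100_000
naive = sum(p_y_naive((i + 0.5) * h) for i in range(n)) * h
right = sum(p_y_correct((i + 0.5) * h) for i in range(n)) * h
print(round(naive, 3), round(right, 3))  # 0.5 1.0
```

Only the corrected density integrates to 1, as equation 3.43 warned.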
3.13 Information Theory

Information theory is a branch of applied mathematics that revolves around quantifying how much information is present in a signal. It was originally invented to study sending messages from discrete alphabets over a noisy channel, such as communication via radio transmission. In this context, information theory tells how to design optimal codes and calculate the expected length of messages sampled from specific probability distributions using various encoding schemes. In the context of machine learning, we can also apply information theory to continuous variables where some of these message length interpretations do not apply. This field is fundamental to many areas of electrical engineering and computer science. In this textbook, we mostly use a few key ideas from information theory to characterize probability distributions or quantify similarity between probability distributions. For more detail on information theory, see Cover and Thomas (2006) or MacKay (2003).

The basic intuition behind information theory is that learning that an unlikely event has occurred is more informative than learning that a likely event has occurred. A message saying "the sun rose this morning" is so uninformative as to be unnecessary to send, but a message saying "there was a solar eclipse this morning" is very informative.

We would like to quantify information
in a way that formalizes this intuition. Specifically,

• Likely events should have low information content, and in the extreme case, events that are guaranteed to happen should have no information content whatsoever.

• Less likely events should have higher information content.

• Independent events should have additive information. For example, finding out that a tossed coin has come up as heads twice should convey twice as much information as finding out that a tossed coin has come up as heads once.

In order to satisfy all three of these properties, we define the self-information of an event x = x to be

I(x) = -\log P(x).    (3.48)

In this book, we always use \log to mean the natural logarithm, with base e. Our definition of I(x) is therefore written in units of nats. One nat is the amount of
information gained by observing an event of probability \frac{1}{e}. Other texts use base-2 logarithms and units called bits or shannons; information measured in bits is just a rescaling of information measured in nats.

When x is continuous, we use the same definition of information by analogy, but some of the properties from the discrete case are lost. For example, an event with unit density still has zero information, despite not being an event that is guaranteed to occur.

Self-information deals only with a single outcome. We can quantify the amount of uncertainty in an entire probability distribution using the Shannon entropy,

H(x) = \mathbb{E}_{x \sim P}[I(x)] = -\mathbb{E}_{x \sim P}[\log P(x)],    (3.49)

also denoted H(P). In other words, the Shannon entropy of a distribution is the expected amount of information in an event drawn from that distribution. It gives a lower bound on the number of bits (if the logarithm is base 2, otherwise the units are different) needed on average to encode symbols drawn from a distribution P. Distributions that are nearly deterministic (where the outcome is nearly certain) have low entropy; distributions that are closer
to uniform have high entropy. See figure 3.5 for a demonstration. When x is continuous, the Shannon entropy is known as the differential entropy.

If we have two separate probability distributions P(x) and Q(x) over the same random variable x, we can measure how different these two distributions are using the Kullback-Leibler (KL) divergence:

D_{\mathrm{KL}}(P \| Q) = \mathbb{E}_{x \sim P}\left[\log \frac{P(x)}{Q(x)}\right] = \mathbb{E}_{x \sim P}[\log P(x) - \log Q(x)].    (3.50)

In the case of discrete variables, it is the extra amount of information (measured in bits if we use the base-2 logarithm, but in machine learning we usually use nats and the natural logarithm) needed to send a message containing symbols drawn from probability distribution P, when we use a code that was designed to minimize the length of messages drawn from probability distribution Q.
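The self-information and Shannon entropy of equations 3.48 and 3.49 can be sketched for discrete distributions (the example distributions are made up):

```python
import math

def self_information(p):
    """I(x) = -log P(x) (eq. 3.48), measured in nats (natural logarithm)."""
    return -math.log(p)

def entropy(dist):
    """Shannon entropy H(P) = -E_{x~P}[log P(x)] (eq. 3.49); 0 log 0 is treated as 0."""
    return sum(-p * math.log(p) for p in dist if p > 0)

print(self_information(1.0) == 0.0)   # True -- a guaranteed event carries no information
print(round(entropy([0.5, 0.5]), 3))  # 0.693 = log 2 nats, maximal for two outcomes
print(entropy([1.0, 0.0]))            # 0.0 -- a deterministic distribution has zero entropy
```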
The KL divergence has many useful properties, most notably that it is non-negative. The KL divergence is 0 if and only if P and Q are the same distribution in the case of discrete variables, or equal "almost everywhere" in the case of continuous variables. Because the KL divergence is non-negative and measures the difference between two distributions, it is often conceptualized as measuring some sort of distance between these distributions. However, it is not a true distance measure because it is not symmetric: D_{\mathrm{KL}}(P \| Q) \neq D_{\mathrm{KL}}(Q \| P) for some P and Q.
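A quick numerical illustration of these properties of the KL divergence; the two three-outcome distributions are made up:

```python
import math

def kl(p, q):
    """D_KL(P || Q) = sum_x P(x) log(P(x)/Q(x)) (eq. 3.50), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.8, 0.1, 0.1]
q = [0.4, 0.3, 0.3]
print(kl(p, p))                       # 0.0 -- zero iff the distributions match
print(kl(p, q) >= 0, kl(q, p) >= 0)  # True True -- always non-negative
print(round(kl(p, q), 3) == round(kl(q, p), 3))  # False -- not symmetric
```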
Figure 3.5 (vertical axis: Shannon entropy in nats): This plot shows how distributions that are closer to deterministic have low Shannon entropy while distributions that are close to uniform have high Shannon entropy. On the horizontal axis, we plot p, the probability of a binary random variable being equal to 1. The entropy is given by (p - 1)\log(1 - p) - p \log p. When p is near 0, the distribution is nearly deterministic, because the random variable is nearly always 0. When p is near 1, the distribution is nearly deterministic, because the random variable is nearly always 1. When p = 0.5, the entropy is maximal, because the distribution is uniform over the two outcomes.

This asymmetry means that there are important consequences to the choice of whether to use D_{\mathrm{KL}}(P \| Q) or D_{\mathrm{KL}}(Q \| P). See figure 3.6 for more detail.

A quantity that is closely related to the KL divergence is the cross-entropy
H(P, Q) = H(P) + D_{\mathrm{KL}}(P \| Q), which is similar to the KL divergence but lacking the term on the left:

H(P, Q) = -\mathbb{E}_{x \sim P} \log Q(x).    (3.51)

Minimizing the cross-entropy with respect to Q is equivalent to minimizing the KL divergence, because Q does not participate in the omitted term.

When computing many of these quantities, it is common to encounter expressions of the form 0 \log 0. By convention, in the context of information theory, we treat these expressions as \lim_{x \to 0} x \log x = 0.

3.14 Structured Probabilistic Models

Machine learning algorithms often involve probability distributions over a very large number of random variables. Often, these probability distributions involve direct interactions between relatively few variables.
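Before moving on, the cross-entropy identity H(P, Q) = H(P) + D_KL(P || Q) from section 3.13 can be checked numerically (with made-up discrete distributions):

```python
import math

def entropy(p):
    return sum(-pi * math.log(pi) for pi in p if pi > 0)

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cross_entropy(p, q):
    """H(P, Q) = -E_{x~P} log Q(x) (eq. 3.51)."""
    return sum(-pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]
q = [0.5, 0.25, 0.25]
# Identity from the text: H(P, Q) = H(P) + D_KL(P || Q).
print(abs(cross_entropy(p, q) - (entropy(p) + kl(p, q))) < 1e-9)  # True
```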
Figure 3.6 (left panel: q^* = \arg\min_q D_{\mathrm{KL}}(p \| q); right panel: q^* = \arg\min_q D_{\mathrm{KL}}(q \| p); both panels plot probability density over x): The KL divergence is asymmetric. Suppose we have a distribution p(x) and wish to approximate it with another distribution q(x). We have the choice of minimizing either D_{\mathrm{KL}}(p \| q) or D_{\mathrm{KL}}(q \| p). We illustrate the effect of this choice using a mixture of two Gaussians for p, and a single Gaussian for q. The choice of which direction of the KL divergence to use is problem-dependent. Some applications require an approximation that usually places high probability anywhere that the true distribution places high probability, while other applications require an approximation that rarely places high probability anywhere that the true distribution places low probability. The choice of the direction of the KL divergence reflects which of these considerations takes priority for each application. (Left) The effect of minimizing D_{\mathrm{KL}}(p \| q). In this case, we select a q that
has high probability where p has high probability. When p has multiple modes, q chooses to blur the modes together, in order to put high probability mass on all of them. (Right) The effect of minimizing D_{\mathrm{KL}}(q \| p). In this case, we select a q that has low probability where p has low probability. When p has multiple modes that are sufficiently widely separated, as in this figure, the KL divergence is minimized by choosing a single mode, in order to avoid putting probability mass in the low-probability areas between modes of p. Here, we illustrate the outcome when q is chosen to emphasize the left mode. We could also have achieved an equal value of the KL divergence by choosing the right mode. If the modes are not separated by a sufficiently strong low-probability region, then this direction of the KL divergence can still choose to blur the modes.
Using a single function to describe the entire joint probability distribution can be very inefficient (both computationally and statistically).

Instead of using a single function to represent a probability distribution, we can split a probability distribution into many factors that we multiply together. For example, suppose we have three random variables: a, b and c. Suppose that a influences the value of b and b influences the value of c, but that a and c are independent given b. We can represent the probability distribution over all three variables as a product of probability distributions over two variables:

p(a, b, c) = p(a)\, p(b \mid a)\, p(c \mid b).    (3.52)

These factorizations can greatly reduce the number of parameters needed to describe the distribution. Each factor uses a number of parameters that is exponential in the number of variables in the factor. This means that we can greatly reduce the cost of representing a distribution if we are able to find a factorization into distributions over fewer variables.

We can describe these kinds of factorizations using graphs. Here we use the word "graph" in the sense of graph theory: a set of vertices that may be connected to each other with edges.
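A sketch of the factorization in equation 3.52 with binary variables and invented conditional probability tables; a full joint over three binary variables would need 7 free parameters, while this factorization needs only 1 + 2 + 2 = 5:

```python
from itertools import product

# Invented conditional probability tables for binary a, b, c (eq. 3.52):
# p(a, b, c) = p(a) p(b | a) p(c | b); c is independent of a given b.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}  # p_b_given_a[a][b]
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}  # p_c_given_b[b][c]

def joint(a, b, c):
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

total = sum(joint(a, b, c) for a, b, c in product([0, 1], repeat=3))
print(round(total, 10))  # 1.0 -- the factorization defines a valid joint distribution
```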
When we represent the factorization of a probability distribution with a graph, we call it a structured probabilistic model or graphical model.

There are two main kinds of structured probabilistic models: directed and undirected. Both kinds of graphical models use a graph \mathcal{G} in which each node in the graph corresponds to a random variable, and an edge connecting two random variables means that the probability distribution is able to represent direct interactions between those two random variables.

Directed models use graphs with directed edges, and they represent factorizations into conditional probability distributions, as in the example above. Specifically, a directed model contains one factor for every random variable x_i in the distribution, and that factor consists of the conditional distribution over x_i given the parents of x_i, denoted Pa_{\mathcal{G}}(x_i):

p(\mathbf{x}) = \prod_i p(x_i \mid Pa_{\mathcal{G}}(x_i)).    (3.53)

See figure 3.7 for an example of a directed graph and the factorization of probability distributions it represents.
Undirected models use graphs with undirected edges, and they represent factorizations into a set of functions; unlike in the directed case, these functions
Figure 3.7: A directed graphical model over random variables a, b, c, d and e. This graph corresponds to probability distributions that can be factored as

p(a, b, c, d, e) = p(a)\, p(b \mid a)\, p(c \mid a, b)\, p(d \mid b)\, p(e \mid c).    (3.54)

This graph allows us to quickly see some properties of the distribution. For example, a and c interact directly, but a and e interact only indirectly via c.

are usually not probability distributions of any kind. Any set of nodes that are all connected to each other in \mathcal{G} is called a clique. Each clique \mathcal{C}^{(i)} in an undirected model is associated with a factor \phi^{(i)}(\mathcal{C}^{(i)}). These factors are just functions, not probability distributions. The output of each factor must be non-negative, but there is no constraint that the factor must sum or integrate to 1 like a probability distribution.

The probability of a configuration of random variables is proportional to the product of all of these factors; assignments that result in larger factor values are more likely.
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf
93
Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org)
0
there is no constraint that the factor must sum or integrate to 1 like a probability distribution. the probability of a configuration of random variables is proportional to the product of all of these factors — assignments that result in larger factor values are more likely. of course, there is no guarantee that this product will sum to 1. we therefore divide by a normalizing constant z, defined to be the sum or integral over all states of the product of the φ functions, in order to obtain a normalized probability distribution : p ( ) = x 1 z i φ ( ) i c ( ) i. ( 3. 55 ) see figure for an example of an undirected graph and the factorization of 3. 8 probability distributions it represents. keep in mind that these graphical representations of factorizations are a language for describing probability distributions. they are not mutually exclusive families of probability distributions. being directed or undirected is not a property of a probability distribution ; it is a property of a particular description of a 78
probability distribution, but any probability distribution may be described in both ways.

Figure 3.8: An undirected graphical model over random variables a, b, c, d and e. This graph corresponds to probability distributions that can be factored as

p(a, b, c, d, e) = (1/Z) φ^(1)(a, b, c) φ^(2)(b, d) φ^(3)(c, e).   (3.56)

This graph allows us to quickly see some properties of the distribution. For example, a and c interact directly, but a and e interact only indirectly via c.

Throughout parts I and II of this book, we will use structured probabilistic models merely as a language to describe which direct probabilistic relationships different machine learning algorithms choose to represent. No further understanding of structured probabilistic models is needed until the discussion of research topics, in part III, where we will explore structured probabilistic models in much greater detail.

This chapter has reviewed the basic concepts of probability theory that are most relevant to deep learning. One more set of fundamental mathematical tools remains: numerical methods.
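Before moving on, the normalization in equation 3.55 can be made concrete with a small Python sketch (mine, not the book's): an undirected model with the clique structure of figure 3.8 over binary variables, using made-up factor values, with Z computed by brute-force enumeration over all joint states.

```python
import itertools

# A minimal sketch of equation 3.55 for the graph of figure 3.8:
# p(a, b, c, d, e) = (1/Z) * phi1(a, b, c) * phi2(b, d) * phi3(c, e),
# with binary variables and hypothetical non-negative factor values.

def phi1(a, b, c):
    # Hypothetical clique factor: rewards agreement among a, b, c.
    return 2.0 if a == b == c else 1.0

def phi2(b, d):
    return 3.0 if b == d else 1.0

def phi3(c, e):
    return 1.5 if c == e else 1.0

def unnormalized(a, b, c, d, e):
    return phi1(a, b, c) * phi2(b, d) * phi3(c, e)

# Z is the sum of the unnormalized product over all 2^5 joint states.
states = list(itertools.product([0, 1], repeat=5))
Z = sum(unnormalized(*s) for s in states)

def p(a, b, c, d, e):
    return unnormalized(a, b, c, d, e) / Z

# Dividing by Z makes the probabilities sum to 1.
total = sum(p(*s) for s in states)
print(round(total, 10))  # -> 1.0
```

Brute-force enumeration is only feasible for tiny models; computing Z efficiently in general is one of the research topics deferred to part III.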
Chapter 4. Numerical Computation

Machine learning algorithms usually require a high amount of numerical computation. This typically refers to algorithms that solve mathematical problems by methods that update estimates of the solution via an iterative process, rather than analytically deriving a formula providing a symbolic expression for the correct solution. Common operations include optimization (finding the value of an argument that minimizes or maximizes a function) and solving systems of linear equations. Even just evaluating a mathematical function on a digital computer can be difficult when the function involves real numbers, which cannot be represented precisely using a finite amount of memory.

4.1 Overflow and Underflow

The fundamental difficulty in performing continuous math on a digital computer is that we need to represent infinitely many real numbers with a finite number of bit patterns. This means that for almost all real numbers, we incur some approximation error when we represent the number in the computer. In many cases, this is just rounding error. Rounding error is problematic, especially when it compounds across many operations, and can cause algorithms that work in theory to fail in practice if they are not designed to minimize the accumulation of rounding error.

One form of rounding error that is particularly devastating is underflow. Underflow occurs when numbers near zero are rounded to zero. Many functions behave qualitatively differently when their argument is zero rather than a small positive number. For example, we usually want to avoid division by zero (some
software environments will raise exceptions when this occurs, others will return a result with a placeholder not-a-number value) or taking the logarithm of zero (this is usually treated as −∞, which then becomes not-a-number if it is used for many further arithmetic operations).

Another highly damaging form of numerical error is overflow. Overflow occurs when numbers with large magnitude are approximated as ∞ or −∞. Further arithmetic will usually change these infinite values into not-a-number values.

One example of a function that must be stabilized against underflow and overflow is the softmax function. The softmax function is often used to predict the probabilities associated with a multinoulli distribution. The softmax function is defined to be

softmax(x)_i = exp(x_i) / ∑_{j=1}^{n} exp(x_j).   (4.1)

Consider what happens when all of the x_i are equal to some constant c. Analytically, we can see that all of the outputs should be equal to 1/n. Numerically, this may not occur when c has large magnitude. If c is very negative, then exp(c) will underflow. This means the denominator of the softmax will become 0, so the final result is undefined. When c is very large and positive, exp(c) will overflow, again resulting in the expression as a whole being undefined. Both of these difficulties can be resolved by instead evaluating softmax(z) where z = x − max_i x_i. Simple algebra shows that the value of the softmax function is not changed analytically by adding or subtracting a scalar from the input vector. Subtracting max_i x_i results in the largest argument to exp being 0, which rules out the possibility of overflow. Likewise, at least one term in the denominator has a value of 1, which rules out the possibility of underflow in the denominator leading to a division by zero.

There is still one small problem. Underflow in the numerator can still cause the expression as a whole to evaluate to zero. This means that if we implement log softmax(x) by first running the softmax subroutine and then passing the result to the log function, we could erroneously obtain −∞. Instead, we must implement a separate function that calculates log softmax in a numerically stable way. The log softmax function can be stabilized using the same trick as we used to stabilize the softmax function.

For the most part, we do not explicitly detail all of the numerical considerations involved in implementing the various algorithms described in this book. Developers of low-level libraries should keep numerical issues in mind when implementing deep learning algorithms. Most readers of this book can simply rely on low-level libraries that provide stable implementations. In some cases, it is possible to implement a new algorithm and have the new implementation automatically stabilized.
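The stabilization tricks above can be sketched in a few lines of Python. This is an illustrative implementation, not code from the book; the function names and test inputs are my own.

```python
import math

def softmax(x):
    # Stabilized softmax: shifting by max(x) leaves the result unchanged
    # analytically, but rules out overflow in exp and a zero denominator.
    m = max(x)
    exps = [math.exp(xi - m) for xi in x]
    s = sum(exps)
    return [e / s for e in exps]

def log_softmax(x):
    # log softmax(x)_i = (x_i - m) - log(sum_j exp(x_j - m)), computed
    # directly so the result never comes from taking log of an
    # underflowed zero.
    m = max(x)
    lse = math.log(sum(math.exp(xi - m) for xi in x))
    return [(xi - m) - lse for xi in x]

# With every x_i equal to a huge constant c, the naive formula would
# overflow, but the stabilized version returns exactly 1/n per entry.
probs = softmax([10000.0, 10000.0, 10000.0, 10000.0])
print(probs)  # -> [0.25, 0.25, 0.25, 0.25]

# Naive log(softmax(x)) would give -inf for the first entry here;
# the stable version stays finite.
print(log_softmax([-10000.0, 0.0]))
```

The second call returns [-10000.0, 0.0] rather than [-inf, 0.0], which is exactly the failure mode the text warns about when composing log with an already-computed softmax.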
Theano (Bergstra et al., 2010; Bastien et al., 2012) is an example of a software package that automatically detects and stabilizes many common numerically unstable expressions that arise in the context of deep learning.

4.2 Poor Conditioning

Conditioning refers to how rapidly a function changes with respect to small changes in its inputs. Functions that change rapidly when their inputs are perturbed slightly can be problematic for scientific computation, because rounding errors in the inputs can result in large changes in the output.

Consider the function f(x) = A⁻¹x. When A ∈ ℝ^{n×n} has an eigenvalue decomposition, its condition number is

max_{i,j} |λ_i / λ_j|.   (4.2)

This is the ratio of the magnitude of the largest and smallest eigenvalue. When this number is large, matrix inversion is particularly sensitive to error in the input. This sensitivity is an intrinsic property of the matrix itself, not the result of rounding error during matrix inversion. Poorly conditioned matrices amplify pre-existing errors when we multiply by the true matrix inverse. In practice, the error will be compounded further by numerical errors in the inversion process itself.

4.3 Gradient-Based Optimization

Most deep learning algorithms involve optimization of some sort. Optimization refers to the task of either minimizing or maximizing some function f(x) by altering x. We usually phrase most optimization problems in terms of minimizing f(x). Maximization may be accomplished via a minimization algorithm by minimizing −f(x).

The function we want to minimize or maximize is called the objective function or criterion. When we are minimizing it, we may also call it the cost function, loss function, or error function. In this book, we use these terms interchangeably, though some machine learning publications assign special meaning to some of these terms.

We often denote the value that minimizes or maximizes a function with a superscript ∗. For example, we might say x* = arg min_x f(x).
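The condition number of equation 4.2 can be checked numerically. The sketch below is illustrative, not from the book; it uses the closed-form eigenvalues of a symmetric 2×2 matrix, and the example diag(5, 1) is chosen to match the condition number 5 used for the quadratic in figure 4.6.

```python
import math

def eigvals_sym2(a, b, c):
    # Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]],
    # from the characteristic polynomial (closed form, no libraries).
    tr, det = a + c, a * c - b * b
    root = math.sqrt(tr * tr - 4.0 * det)
    return (tr - root) / 2.0, (tr + root) / 2.0

def condition_number(a, b, c):
    # Equation 4.2: ratio of the largest to the smallest
    # eigenvalue magnitude.
    lo, hi = eigvals_sym2(a, b, c)
    return max(abs(lo), abs(hi)) / min(abs(lo), abs(hi))

# The identity matrix is perfectly conditioned.
print(condition_number(1.0, 0.0, 1.0))  # -> 1.0

# diag(5, 1) has eigenvalues 5 and 1, so condition number 5: the
# conditioning of the quadratic shown later in figure 4.6.
print(condition_number(5.0, 0.0, 1.0))  # -> 5.0
```

For general matrices one would use a library routine (e.g. an SVD-based condition number) rather than this hand-rolled 2×2 special case.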
Figure 4.1: An illustration of how the gradient descent algorithm uses the derivatives of a function to follow the function downhill to a minimum. Here f(x) = ½x², so f'(x) = x. For x < 0 we have f'(x) < 0, so we can decrease f by moving rightward; for x > 0 we have f'(x) > 0, so we can decrease f by moving leftward. The global minimum is at x = 0, where f'(x) = 0, so gradient descent halts there.

We assume the reader is already familiar with calculus, but provide a brief review of how calculus concepts relate to optimization here.

Suppose we have a function y = f(x), where both x and y are real numbers. The derivative of this function is denoted as f'(x) or as dy/dx. The derivative f'(x) gives the slope of f(x) at the point x. In other words, it specifies how to scale a small change in the input in order to obtain the corresponding change in the output: f(x + ε) ≈ f(x) + ε f'(x).

The derivative is therefore useful for minimizing a function because it tells us how to change x in order to make a small improvement in y. For example, we know that f(x − ε sign(f'(x))) is less than f(x) for small enough ε. We can thus reduce f(x) by moving x in small steps with the opposite sign of the derivative. This technique is called gradient descent (Cauchy, 1847). See figure 4.1 for an example of this technique.

When f'(x) = 0, the derivative provides no information about which direction to move. Points where f'(x) = 0 are known as critical points or stationary points. A local minimum is a point where f(x) is lower than at all neighboring points, so it is no longer possible to decrease f(x) by making infinitesimal steps. A local maximum is a point where f(x) is higher than at all neighboring points, so it is not possible to increase f(x) by making infinitesimal steps.
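The small-steps rule above can be sketched in a few lines of Python, applied to f(x) = ½x² from figure 4.1, whose derivative is f'(x) = x. The step size and iteration count here are arbitrary choices for illustration.

```python
# A sketch of 1-D gradient descent on f(x) = x^2 / 2 (figure 4.1).
# Each step moves opposite the derivative: x <- x - eps * f'(x).

def f(x):
    return 0.5 * x * x

def df(x):
    return x  # derivative of x^2 / 2

def gradient_descent(x, eps=0.1, steps=100):
    for _ in range(steps):
        x = x - eps * df(x)  # small move against the slope
    return x

x0 = 1.5
x_final = gradient_descent(x0)
print(abs(x_final) < 1e-3)  # -> True: converged near the minimum at x = 0
```

For this quadratic, each step multiplies x by (1 − ε), so the iterates shrink geometrically toward the critical point x = 0.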
Figure 4.2: Examples of each of the three types of critical points in 1-D. A critical point is a point with zero slope. Such a point can either be a local minimum, which is lower than the neighboring points; a local maximum, which is higher than the neighboring points; or a saddle point, which has neighbors that are both higher and lower than the point itself.

Some critical points are neither maxima nor minima. These are known as saddle points. See figure 4.2 for examples of each type of critical point.

A point that obtains the absolute lowest value of f(x) is a global minimum. It is possible for there to be only one global minimum or multiple global minima of the function. It is also possible for there to be local minima that are not globally optimal. In the context of deep learning, we optimize functions that may have many local minima that are not optimal, and many saddle points surrounded by very flat regions. All of this makes optimization very difficult, especially when the input to the function is multidimensional. We therefore usually settle for finding a value of f that is very low, but not necessarily minimal in any formal sense. See figure 4.3 for an example.

We often minimize functions that have multiple inputs: f : ℝⁿ → ℝ. For the concept of "minimization" to make sense, there must still be only one (scalar) output.

For functions with multiple inputs, we must make use of the concept of partial derivatives. The partial derivative ∂f(x)/∂x_i measures how f changes as only the variable x_i increases at point x. The gradient generalizes the notion of derivative to the case where the derivative is with respect to a vector: the gradient of f is the vector containing all of the partial derivatives, denoted ∇_x f(x). Element i of the gradient is the partial derivative of f with respect to x_i. In multiple dimensions,
Figure 4.3: Optimization algorithms may fail to find a global minimum when there are multiple local minima or plateaus present. Ideally, we would like to arrive at the global minimum, but this might not be possible; a local minimum that performs nearly as well as the global one is an acceptable halting point, while a local minimum that performs poorly should be avoided. In the context of deep learning, we generally accept such solutions even though they are not truly minimal, so long as they correspond to significantly low values of the cost function.

Critical points are points where every element of the gradient is equal to zero.

The directional derivative in direction u (a unit vector) is the slope of the function f in direction u. In other words, the directional derivative is the derivative of the function f(x + αu) with respect to α, evaluated at α = 0. Using the chain rule, we can see that ∂f(x + αu)/∂α evaluates to u^T ∇_x f(x) when α = 0.

To minimize f, we would like to find the direction in which f decreases the fastest. We can do this using the directional derivative:

min_{u, u^T u = 1} u^T ∇_x f(x)   (4.3)
= min_{u, u^T u = 1} ||u||_2 ||∇_x f(x)||_2 cos θ   (4.4)

where θ is the angle between u and the gradient. Substituting in ||u||_2 = 1 and ignoring factors that do not depend on u, this simplifies to min_u cos θ. This is minimized when u points in the opposite direction as the gradient. In other words, the gradient points directly uphill, and the negative gradient points directly downhill. We can decrease f by moving in the direction of the negative gradient. This is known as the method of steepest descent or gradient descent.

Steepest descent proposes a new point

x' = x − ε ∇_x f(x)   (4.5)

where ε is the learning rate, a positive scalar determining the size of the step.
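The claim of equations 4.3-4.4, that the directional derivative is minimized when u points opposite the gradient, can be checked numerically. The sketch below is mine; the example gradient (3, 4) and the sampling resolution are arbitrary.

```python
import math

# Numerical check of equations 4.3-4.4: among unit vectors u, the
# directional derivative u^T grad is smallest when u points opposite
# the gradient. We sample unit vectors on the circle in 2-D.

grad = (3.0, 4.0)  # example gradient, with ||grad||_2 = 5

best_u, best_slope = None, float("inf")
for k in range(3600):
    theta = 2.0 * math.pi * k / 3600
    u = (math.cos(theta), math.sin(theta))
    slope = u[0] * grad[0] + u[1] * grad[1]  # directional derivative
    if slope < best_slope:
        best_u, best_slope = u, slope

# The minimizing direction is -grad / ||grad||_2 = (-0.6, -0.8),
# and the minimal slope is -||grad||_2 = -5.
print(round(best_u[0], 2), round(best_u[1], 2), round(best_slope, 2))
```

The minimal slope equals −||∇f||₂ because cos θ = −1 there, matching the simplification to min_u cos θ in the text.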
We can choose ε in several different ways. A popular approach is to set ε to a small constant. Sometimes, we can solve for the step size that makes the directional derivative vanish. Another approach is to evaluate f(x − ε∇_x f(x)) for several values of ε and choose the one that results in the smallest objective function value. This last strategy is called a line search.

Steepest descent converges when every element of the gradient is zero (or, in practice, very close to zero). In some cases, we may be able to avoid running this iterative algorithm and just jump directly to the critical point by solving the equation ∇_x f(x) = 0 for x.

Although gradient descent is limited to optimization in continuous spaces, the general concept of repeatedly making a small move (that is approximately the best small move) towards better configurations can be generalized to discrete spaces. Ascending an objective function of discrete parameters is called hill climbing (Russel and Norvig, 2003).

4.3.1 Beyond the Gradient: Jacobian and Hessian Matrices

Sometimes we need to find all of the partial derivatives of a function whose input and output are both vectors. The matrix containing all such partial derivatives is known as a Jacobian matrix. Specifically, if we have a function f : ℝᵐ → ℝⁿ, then the Jacobian matrix J ∈ ℝ^{n×m} of f is defined such that J_{i,j} = ∂f(x)_i / ∂x_j.

We are also sometimes interested in a derivative of a derivative. This is known as a second derivative. For example, for a function f : ℝⁿ → ℝ, the derivative with respect to x_i of the derivative of f with respect to x_j is denoted as ∂²f / ∂x_i ∂x_j. In a single dimension, we can denote d²f/dx² by f''(x). The second derivative tells us how the first derivative will change as we vary the input. This is important because it tells us whether a gradient step will cause as much of an improvement as we would expect based on the gradient alone.

We can think of the second derivative as measuring curvature. Suppose we have a quadratic function (many functions that arise in practice are not quadratic but can be approximated well as quadratic, at least locally). If such a function has a second derivative of zero, then there is no curvature. It is a perfectly flat line, and its value can be predicted using only the gradient. If the gradient is 1, then we can make a step of size ε along the negative gradient, and the cost function will decrease by ε. If the second derivative is negative, the function curves downward, so the cost function will actually decrease by more than ε. Finally, if the second derivative is positive, the function curves upward, so the cost function can decrease by less than ε.
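A quick numeric illustration of this curvature discussion (my own sketch; the example functions are hypothetical): each function below has derivative 1 at x = 0, so a gradient step of size ε lands at x = −ε and the gradient alone predicts a decrease of exactly ε.

```python
eps = 0.5

curves = {
    "no curvature": lambda x: x,      # f'' = 0: perfectly flat line
    "negative": lambda x: x - x * x,  # f'' = -2: curves downward
    "positive": lambda x: x + x * x,  # f'' = +2: curves upward
}

# Each f has f(0) = 0 and f'(0) = 1. Stepping from x = 0 to x = -eps,
# the decrease f(0) - f(-eps) is compared to the prediction eps.
decrease = {name: f(0.0) - f(-eps) for name, f in curves.items()}
print(decrease)  # no curvature: 0.5, negative: 0.75, positive: 0.25
```

The outcomes match the text: with negative curvature the decrease (0.75) exceeds ε, with no curvature it equals ε, and with positive curvature it falls short (0.25).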
See figure 4.4 to see how different forms of curvature affect the relationship between the value of the cost function predicted by the gradient and the true value.

Figure 4.4: The second derivative determines the curvature of a function. Here we show quadratic functions with various curvature. The dashed line indicates the value of the cost function we would expect based on the gradient information alone as we make a gradient step downhill. In the case of negative curvature, the cost function actually decreases faster than the gradient predicts. In the case of no curvature, the gradient predicts the decrease correctly. In the case of positive curvature, the function decreases slower than expected and eventually begins to increase, so steps that are too large can actually increase the function inadvertently.

When our function has multiple input dimensions, there are many second derivatives. These derivatives can be collected together into a matrix called the Hessian matrix. The Hessian matrix H(f)(x) is defined such that

H(f)(x)_{i,j} = ∂²f(x) / ∂x_i ∂x_j.   (4.6)

Equivalently, the Hessian is the Jacobian of the gradient.

Anywhere that the second partial derivatives are continuous, the differentiation operators are commutative, i.e. their order can be swapped:

∂²f(x) / ∂x_i ∂x_j = ∂²f(x) / ∂x_j ∂x_i.   (4.7)

This implies that H_{i,j} = H_{j,i}, so the Hessian matrix is symmetric at such points. Most of the functions we encounter in the context of deep learning have a symmetric Hessian almost everywhere. Because the Hessian matrix is real and symmetric, we can decompose it into a set of real eigenvalues and an orthogonal basis of eigenvectors.
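Equation 4.7 can be checked with finite differences. The sketch below is illustrative (the example function and the step size h are my own choices): with central differences, differentiating first in x_1 and then in x_2, or vice versa, gives matching estimates of the mixed partial.

```python
import math

def f(x1, x2):
    # A smooth example function with continuous second partials.
    return x1 * x1 * x2 + math.sin(x1 * x2)

def mixed(fn, x1, x2, x1_first, h=1e-4):
    # Central-difference estimate of the mixed second partial;
    # x1_first chooses which variable is differentiated first.
    if x1_first:
        g = lambda a, b: (fn(a + h, b) - fn(a - h, b)) / (2 * h)  # d/dx1
        return (g(x1, x2 + h) - g(x1, x2 - h)) / (2 * h)          # then d/dx2
    g = lambda a, b: (fn(a, b + h) - fn(a, b - h)) / (2 * h)      # d/dx2
    return (g(x1 + h, x2) - g(x1 - h, x2)) / (2 * h)              # then d/dx1

a = mixed(f, 0.7, -0.3, True)
b = mixed(f, 0.7, -0.3, False)
print(abs(a - b) < 1e-6)  # -> True: the two orders agree
```

Both orders reduce to the same four-point stencil, which is the finite-difference counterpart of the symmetry H_{i,j} = H_{j,i}.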
The second derivative in a specific direction represented by a unit vector d is given by d^T H d. When d is an eigenvector of H, the second derivative in that direction is given by the corresponding eigenvalue. For other directions of d, the directional second derivative is a weighted average of all of the eigenvalues, with weights between 0 and 1, and eigenvectors that have smaller angle with d receiving more weight. The maximum eigenvalue determines the maximum second derivative and the minimum eigenvalue determines the minimum second derivative.

The (directional) second derivative tells us how well we can expect a gradient descent step to perform. We can make a second-order Taylor series approximation to the function f(x) around the current point x^(0):

f(x) ≈ f(x^(0)) + (x − x^(0))^T g + ½ (x − x^(0))^T H (x − x^(0))   (4.8)

where g is the gradient and H is the Hessian at x^(0). If we use a learning rate of ε, then the new point x will be given by x^(0) − εg. Substituting this into our approximation, we obtain

f(x^(0) − εg) ≈ f(x^(0)) − ε g^T g + ½ ε² g^T H g.   (4.9)

There are three terms here: the original value of the function, the expected improvement due to the slope of the function, and the correction we must apply to account for the curvature of the function. When this last term is too large, the gradient descent step can actually move uphill. When g^T H g is zero or negative, the Taylor series approximation predicts that increasing ε forever will decrease f forever. In practice, the Taylor series is unlikely to remain accurate for large ε, so one must resort to more heuristic choices of ε in this case. When g^T H g is positive, solving for the optimal step size that decreases the Taylor series approximation of the function the most yields

ε* = g^T g / (g^T H g).   (4.10)

In the worst case, when g aligns with the eigenvector of H corresponding to the maximal eigenvalue λ_max, this optimal step size is given by 1/λ_max. To the extent that the function we minimize can be approximated well by a quadratic function, the eigenvalues of the Hessian thus determine the scale of the learning rate.

The second derivative can be used to determine whether a critical point is a local maximum, a local minimum, or a saddle point. Recall that on a critical point, f'(x) = 0. When the second derivative f''(x) > 0, the first derivative f'(x) increases as we move to the right and decreases as we move to the left. This means f'(x − ε) < 0 and f'(x + ε) > 0 for small enough ε.
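Equation 4.10 can be verified on a small quadratic. This sketch is mine, using a hypothetical diagonal Hessian diag(5, 1); the resulting optimal step is compared against nearby step sizes along the negative gradient.

```python
# Check of equation 4.10 on f(x) = 1/2 x^T H x with H = diag(h1, h2).
# Starting from x0, the gradient is g = H x0, and
# eps* = (g^T g) / (g^T H g) minimizes f along the ray x0 - eps * g.

h1, h2 = 5.0, 1.0
x0 = (1.0, 1.0)

def f(x):
    return 0.5 * (h1 * x[0] ** 2 + h2 * x[1] ** 2)

g = (h1 * x0[0], h2 * x0[1])            # gradient H x0
gg = g[0] ** 2 + g[1] ** 2              # g^T g
gHg = h1 * g[0] ** 2 + h2 * g[1] ** 2   # g^T H g
eps_star = gg / gHg

def step(eps):
    # Objective value after a gradient step of size eps.
    return f((x0[0] - eps * g[0], x0[1] - eps * g[1]))

# The optimal step does at least as well as nearby step sizes.
assert step(eps_star) <= step(eps_star * 0.9)
assert step(eps_star) <= step(eps_star * 1.1)
print(round(eps_star, 4))  # -> 0.2063
```

Because f is exactly quadratic here, the Taylor approximation of equation 4.9 is exact, so ε* is the true line-search optimum rather than just an estimate.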
chapter 4. numerical computation f ( x − ) < 0 and f ( x + ) > 0 for small enough. in other words, as we move right, the slope begins to point uphill to the right, and as we move left, the slope begins to point uphill to the left. thus, when f ( x ) = 0 and f ( x ) > 0, we can conclude that x is a local minimum. similarly, when f ( x ) = 0 and f ( x ) < 0, we can conclude that x is a local maximum. this is known as the second derivative test. unfortunately, when f ( x ) = 0, the test is inconclusive. in this case x may be a saddle point, or a part of a flat region. in multiple dimensions, we need to examine all of the second derivatives of the function. using the eigendecomposition of the hessian matrix, we can generalize the second derivative test to multiple dimensions. at a critical point, where ∇xf ( x ) = 0, we can examine the eigenvalues of the hessian to determine whether the critical point is a local maximum, local minimum, or saddle point. when the hessian is positive defini
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf
104
Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org)
0
, where ∇xf ( x ) = 0, we can examine the eigenvalues of the hessian to determine whether the critical point is a local maximum, local minimum, or saddle point. when the hessian is positive definite ( all its eigenvalues are positive ), the point is a local minimum. this can be seen by observing that the directional second derivative in any direction must be positive, and making reference to the univariate second derivative test. likewise, when the hessian is negative definite ( all its eigenvalues are negative ), the point is a local maximum. in multiple dimensions, it is actually possible to find positive evidence of saddle points in some cases. when at least one eigenvalue is positive and at least one eigenvalue is negative, we know that x is a local maximum on one cross section of f but a local minimum on another cross section. see figure for an example. finally, the multidimensional second 4. 5 derivative test can be inconclusive, just like the univariate version. the test is inconclusive whenever all of the non - zero eigenvalues have the same sign, but at least one eigen
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf
104
Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org)
0
##ensional second 4. 5 derivative test can be inconclusive, just like the univariate version. the test is inconclusive whenever all of the non - zero eigenvalues have the same sign, but at least one eigenvalue is zero. this is because the univariate second derivative test is inconclusive in the cross section corresponding to the zero eigenvalue. in multiple dimensions, there is a [UNK] second derivative for each direction at a single point. the condition number of the hessian at this point measures how much the second derivatives [UNK] from each other. when the hessian has a poor condition number, gradient descent performs poorly. this is because in one direction, the derivative increases rapidly, while in another direction, it increases slowly. gradient descent is unaware of this change in the derivative so it does not know that it needs to explore preferentially in the direction where the derivative remains negative for longer. it also makes it [UNK] to choose a good step size. the step size must be small enough to avoid overshooting the minimum and going uphill in directions with strong positive curvature. this usually means that the step size is too small to make significant progress in other directions with less curvature. see figur
This issue can be resolved by using information from the Hessian matrix to guide
Chapter 4. Numerical Computation

Figure 4.5: A saddle point containing both positive and negative curvature. The function in this example is f(x) = x₁² − x₂². Along the axis corresponding to x₁, the function curves upward. This axis is an eigenvector of the Hessian and has a positive eigenvalue. Along the axis corresponding to x₂, the function curves downward. This direction is an eigenvector of the Hessian with a negative eigenvalue. The name "saddle point" derives from the saddle-like shape of this function. This is the quintessential example of a function with a saddle point. In more than one dimension, it is not necessary to have an eigenvalue of 0 in order to get a saddle point: it is only necessary to have both positive and negative eigenvalues. We can think of a saddle point with both signs of eigenvalues as being a local maximum within one cross section and a local minimum within another cross section.
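The eigenvalue test from this caption is easy to check numerically. The following is a minimal sketch (my own illustration, not part of the text; it assumes NumPy is available) that classifies the critical point of f(x) = x₁² − x₂² at the origin from the signs of the Hessian's eigenvalues:

```python
import numpy as np

# For f(x) = x1^2 - x2^2, the Hessian is constant: diag(2, -2).
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

eigvals = np.linalg.eigvalsh(H)  # eigenvalues of the symmetric Hessian

if np.all(eigvals > 0):
    kind = "local minimum"       # positive definite
elif np.all(eigvals < 0):
    kind = "local maximum"       # negative definite
elif np.any(eigvals > 0) and np.any(eigvals < 0):
    kind = "saddle point"        # mixed signs
else:
    kind = "inconclusive"        # zero eigenvalues, rest share a sign

print(eigvals, kind)             # [-2.  2.] saddle point
```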
Figure 4.6: Gradient descent fails to exploit the curvature information contained in the Hessian matrix. Here we use gradient descent to minimize a quadratic function f(x) whose Hessian matrix has condition number 5. This means that the direction of most curvature has five times more curvature than the direction of least curvature. In this case, the most curvature is in the direction [1, 1]^⊤ and the least curvature is in the direction [1, −1]^⊤. The red lines indicate the path followed by gradient descent. This very elongated quadratic function resembles a long canyon. Gradient descent wastes time repeatedly descending canyon walls, because they are the steepest feature. Because the step size is somewhat too large, it has a tendency to overshoot the bottom of the function and thus needs to descend the opposite canyon wall on the next iteration. The large positive eigenvalue of the Hessian corresponding to the eigenvector pointed in this direction indicates that this directional derivative is rapidly increasing, so an optimization algorithm based on the Hessian could predict that the steepest direction is not actually a promising search direction in this context.
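The behavior in this figure can be reproduced in a few lines. Below is a minimal sketch (my own illustration, not the book's code) that runs gradient descent on a quadratic whose Hessian has eigenvalue 5 along [1, 1] and eigenvalue 1 along [1, −1], i.e. condition number 5; the starting point and step size are arbitrary choices:

```python
import numpy as np

# f(x) = 0.5 x^T H x, with a Hessian H of condition number 5:
# eigenvalue 5 along [1, 1] (most curvature), eigenvalue 1 along [1, -1].
H = np.array([[3.0, 2.0],
              [2.0, 3.0]])

x = np.array([-10.0, 5.0])  # arbitrary starting point
eps = 0.3                   # step size, chosen to zig-zag across the "canyon"

for _ in range(50):
    x = x - eps * (H @ x)   # gradient of the quadratic is H x

# Progress along the low-curvature direction [1, -1] is slow: the error
# there shrinks only by a factor 1 - eps*1 = 0.7 per step, while the
# high-curvature component bounces with factor |1 - eps*5| = 0.5.
print(np.linalg.norm(x))
```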
the search. The simplest method for doing so is known as Newton's method. Newton's method is based on using a second-order Taylor series expansion to approximate f(x) near some point x^(0):

f(x) ≈ f(x^(0)) + (x − x^(0))^⊤ ∇x f(x^(0)) + ½ (x − x^(0))^⊤ H(f)(x^(0)) (x − x^(0)).  (4.11)

If we then solve for the critical point of this function, we obtain:

x* = x^(0) − H(f)(x^(0))^{−1} ∇x f(x^(0)).  (4.12)

When f is a positive definite quadratic function, Newton's method consists of applying equation 4.12 once to jump to the minimum of the function directly. When f is not truly quadratic but can be locally approximated as a positive definite quadratic, Newton's method consists of applying equation 4.12 multiple times. Iteratively updating the approximation and jumping to the minimum of the approximation can reach the
critical point much faster than gradient descent would. This is a useful property near a local minimum, but it can be a harmful property near a saddle point. As discussed in section 8.2.3, Newton's method is only appropriate when the nearby critical point is a minimum (all the eigenvalues of the Hessian are positive), whereas gradient descent is not attracted to saddle points unless the gradient points toward them. Optimization algorithms that use only the gradient, such as gradient descent, are called first-order optimization algorithms. Optimization algorithms that also use the Hessian matrix, such as Newton's method, are called second-order optimization algorithms (Nocedal and Wright, 2006). The optimization algorithms employed in most contexts in this book are applicable to a wide variety of functions, but come with almost no guarantees. Deep learning algorithms tend to lack guarantees because the family of functions used in deep learning is quite complicated. In many other fields, the dominant approach to optimization is to design optimization algorithms
for a limited family of functions. In the context of deep learning, we sometimes gain some guarantees by restricting ourselves to functions that are either Lipschitz continuous or have Lipschitz continuous derivatives. A Lipschitz continuous function is a function f whose rate of change is bounded by a Lipschitz constant L:

∀x, ∀y, |f(x) − f(y)| ≤ L ||x − y||₂.  (4.13)

This property is useful because it allows us to quantify our assumption that a small change in the input made by an algorithm such as gradient descent will have
a small change in the output. Lipschitz continuity is also a fairly weak constraint, and many optimization problems in deep learning can be made Lipschitz continuous with relatively minor modifications. Perhaps the most successful field of specialized optimization is convex optimization. Convex optimization algorithms are able to provide many more guarantees by making stronger restrictions. Convex optimization algorithms are applicable only to convex functions, that is, functions for which the Hessian is positive semidefinite everywhere. Such functions are well-behaved because they lack saddle points and all of their local minima are necessarily global minima. However, most problems in deep learning are difficult to express in terms of convex optimization. Convex optimization is used only as a subroutine of some deep learning algorithms. Ideas from the analysis of convex optimization algorithms can be useful for proving the convergence of deep learning algorithms. However, in general, the importance of convex optimization is greatly diminished in the context of deep learning. For more information about convex optimization, see Boyd and Vandenberghe (2004) or Rockafellar (1997).

4.4 Constrained Optimization

Sometimes we wish not only to maximize or minimize a function f(x) over all possible values of x. Instead we may wish
to find the maximal or minimal value of f(x) for values of x in some set S. This is known as constrained optimization. Points x that lie within the set S are called feasible points in constrained optimization terminology. We often wish to find a solution that is small in some sense. A common approach in such situations is to impose a norm constraint, such as ||x|| ≤ 1. One simple approach to constrained optimization is simply to modify gradient descent taking the constraint into account. If we use a small constant step size ε, we can make gradient descent steps, then project the result back into S. If we use a line search, we can search only over step sizes that yield new x points that are feasible, or we can project each point on the line back into the constraint region. When possible, this method can be made more efficient by projecting the gradient into the tangent space of the feasible region before taking the step or beginning the line search (Rosen, 1960). A more sophisticated approach is
to design a different, unconstrained optimization problem whose solution can be converted into a solution to the original, constrained optimization problem. For example, if we want to minimize f(x) for
x ∈ R² with x constrained to have exactly unit L² norm, we can instead minimize g(θ) = f([cos θ, sin θ]^⊤) with respect to θ, then return [cos θ, sin θ] as the solution to the original problem. This approach requires creativity; the transformation between optimization problems must be designed specifically for each case we encounter. The Karush-Kuhn-Tucker (KKT) approach¹ provides a very general solution to constrained optimization. With the KKT approach, we introduce a new function called the generalized Lagrangian or generalized Lagrange function. To define the Lagrangian, we first need to describe S in terms of equations and inequalities. We want a description of S in terms of m functions g^(i) and n functions h^(j) so that S = {x | ∀i, g^(i)(x) = 0 and ∀j, h^(j)(x) ≤ 0}. The equations involving g^(i) are called the equality constraints and the inequalities involving h^(j) are called the inequality constraints. We introduce new variables λi and
αj for each constraint; these are called the KKT multipliers. The generalized Lagrangian is then defined as

L(x, λ, α) = f(x) + Σi λi g^(i)(x) + Σj αj h^(j)(x).  (4.14)

We can now solve a constrained minimization problem using unconstrained optimization of the generalized Lagrangian. Observe that, so long as at least one feasible point exists and f(x) is not permitted to have value ∞, then

min_x max_λ max_{α, α≥0} L(x, λ, α)  (4.15)

has the same optimal objective function value and set of optimal points x as

min_{x∈S} f(x).  (4.16)

This follows because any time the constraints are satisfied,

max_λ max_{α, α≥0} L(x, λ, α) = f(x),  (4.17)
while any time a constraint is violated,

max_λ max_{α, α≥0} L(x, λ, α) = ∞.  (4.18)

¹The KKT approach generalizes the method of Lagrange multipliers, which allows equality constraints but not inequality constraints.
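Equations 4.17 and 4.18 can be illustrated with a toy problem (my own example, not from the text): minimize f(x) = x² subject to h(x) = 1 − x ≤ 0. For a feasible x, the best α ≥ 0 is α = 0, so the inner maximization recovers f(x); for an infeasible x, the term αh(x) grows without bound:

```python
# Toy problem: minimize f(x) = x^2 subject to h(x) = 1 - x <= 0.
# Generalized Lagrangian: L(x, alpha) = x^2 + alpha * (1 - x), alpha >= 0.

def f(x):
    return x ** 2

def h(x):
    return 1.0 - x

def L(x, alpha):
    return f(x) + alpha * h(x)

# Feasible point (h(x) <= 0): increasing alpha only lowers L, so the
# maximum over alpha >= 0 is attained at alpha = 0 and equals f(x).
x_feasible = 2.0
assert max(L(x_feasible, a) for a in [0.0, 1.0, 10.0, 100.0]) == f(x_feasible)

# Infeasible point (h(x) > 0): L grows without bound as alpha increases.
x_infeasible = 0.0
print([L(x_infeasible, a) for a in [0.0, 10.0, 1000.0]])  # [0.0, 10.0, 1000.0]
```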
These properties guarantee that no infeasible point can be optimal, and that the optimum within the feasible points is unchanged. To perform constrained maximization, we can construct the generalized Lagrange function of −f(x), which leads to this optimization problem:

min_x max_λ max_{α, α≥0} −f(x) + Σi λi g^(i)(x) + Σj αj h^(j)(x).  (4.19)

We may also convert this to a problem with maximization in the outer loop:

max_x min_λ min_{α, α≥0} f(x) + Σi λi g^(i)(x) − Σj αj h^(j)(x).  (4.20)

The sign of the term for the equality constraints does not matter; we may define it with addition or subtraction as we wish, because the optimization is free to choose any sign for each λi. The inequality constraints are particularly interesting. We say that a constraint h^(i)(x) is active if h^(i)(x*) = 0. If a constraint is not active, then the solution to the problem found using that constraint would remain at least a local solution if that constraint were removed.
It is possible that an inactive constraint excludes other solutions. For example, a convex problem with an entire region of globally optimal points (a wide, flat region of equal cost) could have a subset of this region eliminated by constraints, or a non-convex problem could have better local stationary points excluded by a constraint that is inactive at convergence. However, the point found at convergence remains a stationary point whether or not the inactive constraints are included. Because an inactive h^(i) has negative value, the solution to min_x max_λ max_{α, α≥0} L(x, λ, α) will have αi = 0. We can thus observe that at the solution, α ⊙ h(x) = 0. In other words, for all i, we know that at least one of the constraints αi ≥ 0 and h^(i)(x) ≤ 0 must be active at the solution. To gain some intuition for this idea, we can say
that either the solution is on the boundary imposed by the inequality and we must use its KKT multiplier to influence the solution to x, or the inequality has no influence on the solution and we represent this by zeroing out its KKT multiplier. A simple set of properties describes the optimal points of constrained optimization problems. These properties are called the Karush-Kuhn-Tucker (KKT) conditions (Karush, 1939; Kuhn and Tucker, 1951). They are necessary conditions, but not always sufficient conditions, for a point to be optimal. The conditions are:

• The gradient of the generalized Lagrangian is zero.
• All constraints on both x and the KKT multipliers are satisfied.
• The inequality constraints exhibit "complementary slackness": α ⊙ h(x) = 0.

For more information about the KKT approach, see Nocedal and Wright (2006).

4.5 Example: Linear Least Squares

Suppose we want to find the value of x that minimizes

f(x) = ½ ||Ax − b||₂².  (4.21)

There are specialized linear algebra algorithms that can solve this problem efficiently. However, we can also explore how to solve it using gradient-based optimization as a simple example of how these techniques work. First, we need to obtain the gradient:

∇x f(x) = A^⊤(Ax − b) = A^⊤Ax − A^⊤b.  (4.22)

We can then follow this gradient downhill, taking small steps. See algorithm 4.1 for details.

Algorithm 4.1 An algorithm to minimize f(x) = ½||Ax − b||₂² with respect to x using gradient descent, starting from an arbitrary value of x.
Set the step size (ε) and tolerance (δ) to small, positive numbers.
while ||A^⊤Ax − A^⊤b||₂ > δ do
  x ← x − ε(A^⊤Ax − A^⊤b)
end while
One can also solve this problem using Newton's method. In this case, because the true function is quadratic, the quadratic approximation employed by Newton's method is exact, and the algorithm converges to the global minimum in a single step. Now suppose we wish to minimize the same function, but subject to the constraint x^⊤x ≤ 1. To do so, we introduce the Lagrangian

L(x, λ) = f(x) + λ(x^⊤x − 1).  (4.23)

We can now solve the problem

min_x max_{λ, λ≥0} L(x, λ).  (4.24)
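Algorithm 4.1 and the one-step Newton solution for the unconstrained problem can be sketched in a few lines. This is my own illustration with made-up data (A, b, the step size rule, and the tolerance are arbitrary choices), not code from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))  # made-up problem data
b = rng.standard_normal(5)

# Algorithm 4.1: gradient descent on f(x) = 0.5 * ||A x - b||_2^2.
delta = 1e-8                                   # tolerance
eps = 1.0 / np.linalg.eigvalsh(A.T @ A).max()  # step size small enough to converge
x = np.zeros(3)
for _ in range(100_000):                       # iteration cap, for safety
    grad = A.T @ A @ x - A.T @ b               # equation 4.22
    if np.linalg.norm(grad) <= delta:
        break
    x = x - eps * grad

# Newton's method: the objective is exactly quadratic with Hessian A^T A,
# so a single step x* = x0 - H^{-1} grad jumps to the global minimum.
x0 = np.zeros(3)
x_newton = x0 - np.linalg.solve(A.T @ A, A.T @ A @ x0 - A.T @ b)

print(np.allclose(x, x_newton, atol=1e-6))  # True
```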
The smallest-norm solution to the unconstrained least squares problem may be found using the Moore-Penrose pseudoinverse: x = A⁺b. If this point is feasible, then it is the solution to the constrained problem. Otherwise, we must find a solution where the constraint is active. By differentiating the Lagrangian with respect to x, we obtain the equation

A^⊤Ax − A^⊤b + 2λx = 0.  (4.25)

This tells us that the solution will take the form

x = (A^⊤A + 2λI)^{−1} A^⊤b.  (4.26)

The magnitude of λ must be chosen such that the result obeys the constraint. We can find this value by performing gradient ascent on λ. To do so, observe

∂/∂λ L(x, λ) = x^⊤x − 1.  (4.27)

When the norm of x exceeds 1, this derivative is positive, so to follow the derivative uphill and increase the Lagrangian with respect to λ, we increase λ. Because the coefficient on the x^⊤x penalty has increased, solving the linear equation for x will now yield a solution with smaller norm. The process of solving the linear equation and adjusting λ continues until
x has the correct norm and the derivative on λ is 0. This concludes the mathematical preliminaries that we use to develop machine learning algorithms. We are now ready to build and analyze some full-fledged learning systems.
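The procedure just described, solving equation 4.26 and adjusting λ by gradient ascent until the constraint holds, can be sketched as follows. This is my own illustration with made-up data and a made-up ascent step size, not code from the book:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))    # made-up problem data
b = 10.0 * rng.standard_normal(5)  # scaled so the constraint ends up active

# Unconstrained minimum-norm solution via the Moore-Penrose pseudoinverse.
x = np.linalg.pinv(A) @ b

if x @ x > 1.0:
    # Constraint is active: alternate between solving equation 4.26 and
    # gradient ascent on lambda (the derivative is x^T x - 1, eq. 4.27).
    lam, step = 0.0, 0.01          # made-up ascent step size
    for _ in range(100_000):
        x = np.linalg.solve(A.T @ A + 2.0 * lam * np.eye(3), A.T @ b)
        deriv = x @ x - 1.0
        if abs(deriv) < 1e-6:
            break
        lam = max(0.0, lam + step * deriv)

print(x @ x)  # approximately 1: the solution sits on the constraint boundary
```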
Chapter 5. Machine Learning Basics

Deep learning is a specific kind of machine learning. In order to understand deep learning well, one must have a solid understanding of the basic principles of machine learning. This chapter provides a brief course in the most important general principles that will be applied throughout the rest of the book. Novice readers or those who want a wider perspective are encouraged to consider machine learning textbooks with a more comprehensive coverage of the fundamentals, such as Murphy (2012) or Bishop (2006). If you are already familiar with machine learning basics, feel free to skip ahead to section 5.11. That section covers some perspectives on traditional machine learning techniques that have strongly influenced the development of deep learning algorithms.

We begin with a definition of what a learning algorithm is, and present an example: the linear regression algorithm. We then proceed to describe how the challenge of fitting the training data differs from the challenge of finding patterns that generalize to new data. Most machine learning algorithms have settings called hyperparameters that must be determined external to the learning algorithm itself; we discuss how to set these using additional data. Machine learning is essentially a form of applied statistics with increased emphasis on the use of computers to statistically estimate complicated functions and
a decreased emphasis on proving confidence intervals around these functions; we therefore present the two central approaches to statistics: frequentist estimators and Bayesian inference. Most machine learning algorithms can be divided into the categories of supervised learning and unsupervised learning; we describe these categories and give some examples of simple learning algorithms from each category. Most deep learning algorithms are based on an optimization algorithm called stochastic gradient descent. We describe how to combine various algorithm components such as
an optimization algorithm, a cost function, a model, and a dataset to build a machine learning algorithm. Finally, in section 5.11, we describe some of the factors that have limited the ability of traditional machine learning to generalize. These challenges have motivated the development of deep learning algorithms that overcome these obstacles.

5.1 Learning Algorithms

A machine learning algorithm is an algorithm that is able to learn from data. But what do we mean by learning? Mitchell (1997) provides the definition "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." One can imagine a very wide variety of experiences E, tasks T, and performance measures P, and we do not make any attempt in this book to provide a formal definition of what may be used for each of these entities. Instead, the following sections provide intuitive descriptions and examples of the different kinds of tasks, performance measures, and experiences that can be used to construct machine learning algorithms.

5.1.1 The Task, T

Machine learning allows us to tackle tasks that are too difficult to solve with fixed programs
written and designed by human beings. From a scientific and philosophical point of view, machine learning is interesting because developing our understanding of machine learning entails developing our understanding of the principles that underlie intelligence. In this relatively formal definition of the word "task," the process of learning itself is not the task. Learning is our means of attaining the ability to perform the task. For example, if we want a robot to be able to walk, then walking is the task. We could program the robot to learn to walk, or we could attempt to directly write a program that specifies how to walk manually. Machine learning tasks are usually described in terms of how the machine learning system should process an example. An example is a collection of features that have been quantitatively measured from some object or event that we want the machine learning system to process. We typically represent an example as a vector x ∈ Rⁿ where each entry xi of the vector is another feature. For example, the features of
an image are usually the values of the pixels in the image.
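As a concrete (made-up) illustration of this representation, a tiny grayscale image can be flattened into a feature vector x ∈ Rⁿ whose entries are pixel brightness values:

```python
import numpy as np

# A made-up 3x3 grayscale "image": each entry is one pixel's brightness.
image = np.array([[0.0, 0.5, 1.0],
                  [0.2, 0.8, 0.3],
                  [0.9, 0.1, 0.4]])

# Represent the example as a vector x in R^n (here n = 9); each entry
# x_i of the vector is one feature.
x = image.reshape(-1)

print(x.shape)  # (9,)
```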
Many kinds of tasks can be solved with machine learning. Some of the most common machine learning tasks include the following:

• Classification: In this type of task, the computer program is asked to specify which of k categories some input belongs to. To solve this task, the learning algorithm is usually asked to produce a function f : Rⁿ → {1, ..., k}. When y = f(x), the model assigns an input described by vector x to a category identified by numeric code y. There are other variants of the classification task, for example, where f outputs a probability distribution over classes. An example of a classification task is object recognition, where the input is an image (usually described as a set of pixel brightness values), and the output is a numeric code identifying the object in the image. For example, the Willow Garage PR2 robot is able to act as a waiter that can recognize different kinds of drinks and deliver them to people on command (Goodfellow et al., 2010). Modern object recognition is best accomplished with deep learning (Krizhevsky et al., 2012; Ioffe and Szegedy, 2015).
Object recognition is the same basic technology that allows computers to recognize faces (Taigman et al., 2014), which can be used to automatically tag people in photo collections and allow computers to interact more naturally with their users.

• Classification with missing inputs: Classification becomes more challenging if the computer program is not guaranteed that every measurement in its input vector will always be provided. In order to solve the classification task, the learning algorithm only has to define a single function mapping from a vector input to a categorical output. When some of the inputs may be missing, rather than providing a single classification function, the learning algorithm must learn a set of functions. Each function corresponds to classifying x with a different subset of its inputs missing. This kind of situation arises frequently in medical diagnosis, because many kinds of medical tests are expensive or invasive. One way to efficiently define such a large set of functions is to learn
a probability distribution over all of the relevant variables, then solve the classification task by marginalizing out the missing variables. With n input variables, we can now obtain all 2ⁿ different classification functions needed for each possible set of missing inputs, but we only need to learn a single function describing the joint probability distribution. See Goodfellow et al. (2013b) for an example of a deep probabilistic model applied to such a task in this way. Many of the other tasks described in this section can also be generalized to work with missing inputs; classification with missing inputs is just one example of what machine learning can do.
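A toy sketch of this idea (entirely made-up numbers, not from the text): with two binary input variables, one learned joint distribution yields a classifier for every pattern of missing inputs by summing out the unobserved variables:

```python
# Made-up joint distribution p(y, x1, x2) over a binary class y and two
# binary features; the probabilities sum to 1.
p = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.10, (0, 1, 0): 0.05, (0, 1, 1): 0.05,
    (1, 0, 0): 0.05, (1, 0, 1): 0.10, (1, 1, 0): 0.15, (1, 1, 1): 0.30,
}

def classify(x1=None, x2=None):
    """Pick the most probable class, marginalizing out any missing input."""
    scores = {}
    for y in (0, 1):
        scores[y] = sum(
            p[(y, a, b)]
            for a in (0, 1) if x1 is None or a == x1
            for b in (0, 1) if x2 is None or b == x2
        )
    return max(scores, key=scores.get)

print(classify(x1=1, x2=1))  # both inputs observed -> 1
print(classify(x2=1))        # x1 missing, marginalized out -> 1
print(classify(x1=0, x2=0))  # -> 0
```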
Chapter 5. Machine Learning Basics
• Regression: In this type of task, the computer program is asked to predict a numerical value given some input. To solve this task, the learning algorithm is asked to output a function f : R^n → R. This type of task is similar to classification, except that the format of output is different. An example of a regression task is the prediction of the expected claim amount that an insured person will make (used to set insurance premiums), or the prediction of future prices of securities. These kinds of predictions are also used for algorithmic trading.
• Transcription: In this type of task, the machine learning system is asked to observe a relatively unstructured representation of some kind of data and transcribe it into discrete, textual form. For example, in optical character recognition, the computer program is shown a photograph containing an image of text and is asked to return this text in the form of a sequence of characters (e.g., in ASCII or Unicode format). Google Street View uses deep learning to process address numbers in this way (Goodfellow et al., 2014d). Another example is speech recognition, where the computer program is provided an audio waveform and emits a sequence
of characters or word ID codes describing the words that were spoken in the audio recording. Deep learning is a crucial component of modern speech recognition systems used at major companies including Microsoft, IBM and Google (Hinton et al., 2012b).
• Machine translation: In a machine translation task, the input already consists of a sequence of symbols in some language, and the computer program must convert this into a sequence of symbols in another language. This is commonly applied to natural languages, such as translating from English to French. Deep learning has recently begun to have an important impact on this kind of task (Sutskever et al., 2014; Bahdanau et al., 2015).
• Structured output: Structured output tasks involve any task where the output is a vector (or other data structure containing multiple values) with important relationships between the different elements. This is a broad category, and subsumes the transcription and translation tasks described above, but also many other tasks. One example is parsing: mapping a natural
language sentence into a tree that describes its grammatical structure, tagging nodes of the tree as being verbs, nouns, or adverbs, and so on. See Collobert (2011) for an example of deep learning applied to a parsing task. Another example is pixel-wise segmentation of images, where the computer program assigns every pixel in an image to a specific category. For
example, deep learning can be used to annotate the locations of roads in aerial photographs (Mnih and Hinton, 2010). The output need not have its form mirror the structure of the input as closely as in these annotation-style tasks. For example, in image captioning, the computer program observes an image and outputs a natural language sentence describing the image (Kiros et al., 2014a,b; Mao et al., 2015; Vinyals et al., 2015b; Donahue et al., 2014; Karpathy and Li, 2015; Fang et al., 2015; Xu et al., 2015). These tasks are called structured output tasks because the program must output several values that are all tightly inter-related. For example, the words produced by an image captioning program must form a valid sentence.
• Anomaly detection: In this type of task, the computer program sifts through a set of events or objects, and flags some of them as being unusual or atypical. An example of an anomaly detection task is credit card fraud detection. By modeling your purchasing habits, a credit card company can detect misuse of your cards. If a thief
steals your credit card or credit card information, the thief's purchases will often come from a different probability distribution over purchase types than your own. The credit card company can prevent fraud by placing a hold on an account as soon as that card has been used for an uncharacteristic purchase. See Chandola et al. (2009) for a survey of anomaly detection methods.
• Synthesis and sampling: In this type of task, the machine learning algorithm is asked to generate new examples that are similar to those in the training data. Synthesis and sampling via machine learning can be useful for media applications where it can be expensive or boring for an artist to generate large volumes of content by hand. For example, video games can automatically generate textures for large objects or landscapes, rather than requiring an artist to manually label each pixel (Luo et al., 2013). In some cases, we want the sampling or synthesis procedure to generate some specific kind of output given the input. For example,
in a speech synthesis task, we provide a written sentence and ask the program to emit an audio waveform containing a spoken version of that sentence. This is a kind of structured output task, but with the added qualification that there is no single correct output for each input, and we explicitly desire a large amount of variation in the output, in order for the output to seem more natural and realistic.
• Imputation of missing values: In this type of task, the machine learning algorithm is given a new example x ∈ R^n, but with some entries x_i of x missing. The algorithm must provide a prediction of the values of the missing entries.
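As a sketch of one way to perform imputation, suppose x is modeled as a multivariate Gaussian; a missing entry can then be filled in with its conditional mean given the observed ones. The parameters and the `impute` helper below are illustrative assumptions, not a method prescribed by the text:

```python
import numpy as np

# Hypothetical model: x ~ N(mu, Sigma). A missing block is imputed with
# its conditional mean  mu_m + Sigma_mo Sigma_oo^{-1} (x_o - mu_o).
mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[1.0, 0.6, 0.2],
                  [0.6, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])

def impute(x, missing):
    """Fill the entries listed in `missing` with their conditional mean."""
    obs = [i for i in range(len(x)) if i not in missing]
    S_mo = Sigma[np.ix_(missing, obs)]
    S_oo = Sigma[np.ix_(obs, obs)]
    cond_mean = mu[missing] + S_mo @ np.linalg.solve(S_oo, x[obs] - mu[obs])
    out = x.copy()
    out[missing] = cond_mean
    return out

x = np.array([0.5, np.nan, -0.5])   # entry 1 is missing
print(impute(x, missing=[1]))       # entry 1 replaced by its conditional mean
```

The observed entries are passed through unchanged; only the missing coordinate is replaced by the mean of p(x_missing | x_observed) under the assumed Gaussian.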
• Denoising: In this type of task, the machine learning algorithm is given as input a corrupted example x̃ ∈ R^n obtained by an unknown corruption process from a clean example x ∈ R^n. The learner must predict the clean example x from its corrupted version x̃, or more generally predict the conditional probability distribution p(x | x̃).
• Density estimation or probability mass function estimation: In the density estimation problem, the machine learning algorithm is asked to learn a function p_model : R^n → R, where p_model(x) can be interpreted as a probability density function (if x is continuous) or a probability mass function (if x is discrete) on the space that the examples were drawn from. To do such a task well (we will specify exactly what that means when we discuss performance measures, P), the algorithm needs to learn the structure of the data it has seen. It must know where examples cluster tightly and where they are unlikely to occur. Most of the tasks described above require the learning algorithm to at least implicitly capture the structure of the probability distribution. Density estimation allows us to explicitly capture that distribution. In principle, we can then perform computations on that distribution in order to solve the other tasks as well.
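A minimal sketch of the density estimation task, assuming one simple model choice among many (a multivariate Gaussian fit by maximum likelihood; nothing here is prescribed by the text):

```python
import numpy as np

# Learn p_model : R^n -> R as a maximum-likelihood multivariate Gaussian.
rng = np.random.default_rng(0)
X = rng.normal(loc=[0.0, 2.0], scale=1.0, size=(500, 2))  # training examples

mu = X.mean(axis=0)                          # ML estimate of the mean
Sigma = np.cov(X, rowvar=False, bias=True)   # ML estimate of the covariance

def log_density(x):
    """log p_model(x) under the fitted Gaussian."""
    n = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)
    return -0.5 * (n * np.log(2 * np.pi) + np.log(np.linalg.det(Sigma)) + quad)

# The fitted density is highest where the training data cluster:
print(log_density(mu) > log_density(mu + 5.0))  # True
```

The fitted `log_density` assigns high scores where examples cluster tightly and low scores where they are unlikely to occur, which is exactly the structure the task asks the learner to capture.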
For example, if we have performed density estimation to obtain a probability distribution p(x), we can use that distribution to solve the missing value imputation task. If a value x_i is missing and all of the other values, denoted x_−i, are given, then we know the distribution over it is given by p(x_i | x_−i). In practice, density estimation does not always allow us to solve all of these related tasks, because in many cases the required operations on p(x) are computationally intractable.
Of course, many other tasks and types of tasks are possible. The types of tasks we list here are intended only to provide examples of what machine learning can do, not to define a rigid taxonomy of tasks.

5.1.2 The Performance Measure, P

In order to evaluate the abilities of a machine learning algorithm, we must design a quantitative measure of its performance. Usually this performance measure P is specific to the task T being carried out by the system.
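Once a performance measure is chosen, computing it is usually straightforward. As a toy illustration for a classification-style task, accuracy and its complement the error rate might be computed as follows (the labels below are made up):

```python
import numpy as np

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])  # made-up ground-truth labels
y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 1])  # made-up model predictions

accuracy = np.mean(y_pred == y_true)  # proportion of correct outputs
error_rate = 1.0 - accuracy           # expected 0-1 loss
print(accuracy, error_rate)           # 0.75 0.25
```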
For tasks such as classification, classification with missing inputs, and transcription, we often measure the accuracy of the model. Accuracy is just the proportion of examples for which the model produces the correct output. We can
also obtain equivalent information by measuring the error rate, the proportion of examples for which the model produces an incorrect output. We often refer to the error rate as the expected 0-1 loss. The 0-1 loss on a particular example is 0 if it is correctly classified and 1 if it is not. For tasks such as density estimation, it does not make sense to measure accuracy, error rate, or any other kind of 0-1 loss. Instead, we must use a different performance metric that gives the model a continuous-valued score for each example. The most common approach is to report the average log-probability the model assigns to some examples.
Usually we are interested in how well the machine learning algorithm performs on data that it has not seen before, since this determines how well it will work when deployed in the real world. We therefore evaluate these performance measures using a test set of data that is separate from the data used for training the machine learning system.
The choice of performance measure may seem straightforward and objective, but it is often difficult to choose a performance measure that corresponds well to the desired behavior of the system. In some cases, this is because it is difficult to decide what should be measured. For example, when performing
a transcription task, should we measure the accuracy of the system at transcribing entire sequences, or should we use a more fine-grained performance measure that gives partial credit for getting some elements of the sequence correct? When performing a regression task, should we penalize the system more if it frequently makes medium-sized mistakes or if it rarely makes very large mistakes? These kinds of design choices depend on the application.
In other cases, we know what quantity we would ideally like to measure, but measuring it is impractical. For example, this arises frequently in the context of density estimation. Many of the best probabilistic models represent probability distributions only implicitly. Computing the actual probability value assigned to a specific point in space in many such models is intractable. In these cases, one must design an alternative criterion that still corresponds to the design objectives, or design a good approximation to the desired criterion.

5.1.3 The Experience, E

Machine learning algorithms can be broadly categorized as
unsupervised or supervised by what kind of experience they are allowed to have during the learning process.
Most of the learning algorithms in this book can be understood as being allowed to experience an entire dataset. A dataset is a collection of many examples, as
defined in section 5.1.1. Sometimes we will also call examples data points.
One of the oldest datasets studied by statisticians and machine learning researchers is the Iris dataset (Fisher, 1936). It is a collection of measurements of different parts of 150 iris plants. Each individual plant corresponds to one example. The features within each example are the measurements of each of the parts of the plant: the sepal length, sepal width, petal length and petal width. The dataset also records which species each plant belonged to. Three different species are represented in the dataset.
Unsupervised learning algorithms experience a dataset containing many features, then learn useful properties of the structure of this dataset. In the context of deep learning, we usually want to learn the entire probability distribution that generated a dataset, whether explicitly as in density estimation or implicitly for tasks like synthesis or denoising. Some other unsupervised learning algorithms perform other roles, like clustering, which consists of dividing the dataset into clusters of similar examples.
Supervised learning algorithms experience a dataset containing features, but each example is also associated with a label or target. For example,
the Iris dataset is annotated with the species of each iris plant. A supervised learning algorithm can study the Iris dataset and learn to classify iris plants into three different species based on their measurements.
Roughly speaking, unsupervised learning involves observing several examples of a random vector x, and attempting to implicitly or explicitly learn the probability distribution p(x), or some interesting properties of that distribution, while supervised learning involves observing several examples of a random vector x and an associated value or vector y, and learning to predict y from x, usually by estimating p(y | x). The term supervised learning originates from the view of the target y being provided by an instructor or teacher who shows the machine learning system what to do. In unsupervised learning, there is no instructor or teacher, and the algorithm must learn to make sense of the data without this guide.
Unsupervised learning and supervised learning are not formally defined terms. The lines between
them are often blurred. Many machine learning technologies can be used to perform both tasks. For example, the chain rule of probability states that for a vector x ∈ R^n, the joint distribution can be decomposed as

p(x) = ∏_{i=1}^{n} p(x_i | x_1, ..., x_{i−1}).    (5.1)

This decomposition means that we can solve the ostensibly unsupervised problem of modeling p(x) by splitting it into n supervised learning problems. Alternatively, we
can solve the supervised learning problem of learning p(y | x) by using traditional unsupervised learning technologies to learn the joint distribution p(x, y) and inferring

p(y | x) = p(x, y) / Σ_{y′} p(x, y′).    (5.2)

Though unsupervised learning and supervised learning are not completely formal or distinct concepts, they do help to roughly categorize some of the things we do with machine learning algorithms. Traditionally, people refer to regression, classification and structured output problems as supervised learning. Density estimation in support of other tasks is usually considered unsupervised learning.
Other variants of the learning paradigm are possible. For example, in semi-supervised learning, some examples include a supervision target but others do not. In multi-instance learning, an entire collection of examples is labeled as containing or not containing an example of a class, but the individual members of the collection are not labeled. For a recent example of multi-instance learning with deep models, see Kotzias et al. (2015).
Some machine learning algorithms do not just experience a fixed dataset. For example, reinforcement learning algorithms interact with an environment