…labeled. For a recent example of multi-instance learning with deep models, see Kotzias et al. (2015).

Some machine learning algorithms do not just experience a fixed dataset. For example, reinforcement learning algorithms interact with an environment, so there is a feedback loop between the learning system and its experiences. Such algorithms are beyond the scope of this book. Please see Sutton and Barto (1998) or Bertsekas and Tsitsiklis (1996) for information about reinforcement learning, and Mnih et al. (2013) for the deep learning approach to reinforcement learning.

Most machine learning algorithms simply experience a dataset. A dataset can be described in many ways. In all cases, a dataset is a collection of examples, which are in turn collections of features.

One common way of describing a dataset is with a design matrix. A design matrix is a matrix containing a different example in each row. Each column of the matrix corresponds to a different feature. For instance, the Iris dataset contains 150 examples with four features for each example. This means we can represent the dataset with a design matrix $X \in \mathbb{R}^{150 \times 4}$, where $X_{i,1}$ is the sepal length of plant $i$,
[Source: Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville, Chapter 5: Machine Learning Basics]
$X_{i,2}$ is the sepal width of plant $i$, and so on. We will describe most of the learning algorithms in this book in terms of how they operate on design matrix datasets.

Of course, to describe a dataset as a design matrix, it must be possible to describe each example as a vector, and each of these vectors must be the same size. This is not always possible. For example, if you have a collection of photographs with different widths and heights, then different photographs will contain different numbers of pixels, so not all of the photographs may be described with the same length of vector. Section 9.7 and Chapter 10 describe how to handle different types of such heterogeneous data.
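As a concrete illustration of the design matrix convention (not from the book), here is a minimal NumPy sketch that builds a matrix shaped like the Iris example: 150 examples as rows, four features as columns. The values are random placeholders, not the real Iris measurements.

```python
import numpy as np

# Design matrix: one example per row, one feature per column.
# Shaped like the Iris example in the text (150 examples, 4 features),
# but filled with random placeholder values, not real measurements.
rng = np.random.default_rng(0)
X = rng.uniform(low=0.1, high=8.0, size=(150, 4))

# X[i, 0] plays the role of x_{i,1} (e.g. sepal length of plant i),
# X[i, 1] the role of x_{i,2} (e.g. sepal width), and so on.
sepal_length_of_plant_3 = X[3, 0]
sepal_width_of_plant_3 = X[3, 1]

print(X.shape)  # (150, 4)
```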
In cases like these, rather than describing the dataset as a matrix with $m$ rows, we will describe it as a set containing $m$ elements: $\{x^{(1)}, x^{(2)}, \ldots, x^{(m)}\}$. This notation does not imply that any two example vectors $x^{(i)}$ and $x^{(j)}$ have the same size.

In the case of supervised learning, the example contains a label or target as well as a collection of features. For example, if we want to use a learning algorithm to perform object recognition from photographs, we need to specify which object appears in each of the photos. We might do this with a numeric code, with 0 signifying a person, 1 signifying a car, 2 signifying a cat, and so on. Often when working with a dataset containing a design matrix of feature observations $X$, we also provide a vector of labels $y$, with $y_i$ providing the label for example $i$.

Of course, sometimes the label may be more than just a single number. For example, if we want to train a speech recognition system to transcribe entire sentences, then the label for each example sentence is a sequence of words.
Just as there is no formal definition of supervised and unsupervised learning, there is no rigid taxonomy of datasets or experiences. The structures described here cover most cases, but it is always possible to design new ones for new applications.

5.1.4 Example: Linear Regression

Our definition of a machine learning algorithm as an algorithm that is capable of improving a computer program's performance at some task via experience is somewhat abstract. To make this more concrete, we present an example of a simple machine learning algorithm: linear regression. We will return to this example repeatedly as we introduce more machine learning concepts that help to understand its behavior.

As the name implies, linear regression solves a regression problem. In other words, the goal is to build a system that can take a vector $x \in \mathbb{R}^n$ as input and predict the value of a scalar $y \in \mathbb{R}$ as its output. In the case of linear regression, the output is a linear function of the input.
Let $\hat{y}$ be the value that our model predicts $y$ should take on. We define the output to be

$$\hat{y} = w^\top x \tag{5.3}$$

where $w \in \mathbb{R}^n$ is a vector of parameters. Parameters are values that control the behavior of the system. In this case, $w_i$ is the coefficient that we multiply by feature $x_i$ before summing up the contributions from all the features. We can think of $w$ as a set of weights that determine how each feature affects the prediction. If a feature $x_i$ receives a positive weight $w_i$,
then increasing the value of that feature increases the value of our prediction $\hat{y}$. If a feature receives a negative weight, then increasing the value of that feature decreases the value of our prediction. If a feature's weight is large in magnitude, then it has a large effect on the prediction. If a feature's weight is zero, it has no effect on the prediction.

We thus have a definition of our task $T$: to predict $y$ from $x$ by outputting $\hat{y} = w^\top x$. Next we need a definition of our performance measure, $P$.

Suppose that we have a design matrix of $m$ example inputs that we will not use for training, only for evaluating how well the model performs. We also have a vector of regression targets providing the correct value of $y$ for each of these examples. Because this dataset will only be used for evaluation, we call it the test set. We refer to the design matrix of inputs as $X^{(\text{test})}$ and the vector of regression targets as $y^{(\text{test})}$.
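The task $T$ can be stated in a few lines of code. The sketch below is an illustration with made-up weights and synthetic inputs: it computes $\hat{y} = w^\top x$ for every row of a design matrix as a single matrix-vector product, and checks the claim above that a positively weighted feature pushes the prediction up when that feature increases.

```python
import numpy as np

rng = np.random.default_rng(0)
X_test = rng.normal(size=(5, 3))    # 5 synthetic examples, 3 features
w = np.array([0.5, -1.0, 2.0])      # made-up parameter vector

# \hat{y} = w^T x for each example; over a whole design matrix this is X w.
yhat_test = X_test @ w

# Feature 2 has a positive weight (+2.0), so increasing it increases \hat{y}.
x = X_test[0]
x_bumped = x.copy()
x_bumped[2] += 1.0
assert x_bumped @ w > x @ w

# Feature 1 has a negative weight (-1.0), so increasing it decreases \hat{y}.
x_bumped2 = x.copy()
x_bumped2[1] += 1.0
assert x_bumped2 @ w < x @ w
```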
One way of measuring the performance of the model is to compute the mean squared error of the model on the test set. If $\hat{y}^{(\text{test})}$ gives the predictions of the model on the test set, then the mean squared error is given by

$$\text{MSE}_{\text{test}} = \frac{1}{m} \sum_i \left( \hat{y}^{(\text{test})} - y^{(\text{test})} \right)_i^2. \tag{5.4}$$

Intuitively, one can see that this error measure decreases to 0 when $\hat{y}^{(\text{test})} = y^{(\text{test})}$. We can also see that

$$\text{MSE}_{\text{test}} = \frac{1}{m} \left\| \hat{y}^{(\text{test})} - y^{(\text{test})} \right\|_2^2, \tag{5.5}$$

so the error increases whenever the Euclidean distance between the predictions and the targets increases.

To make a machine learning algorithm, we need to design an algorithm that will improve the weights $w$ in a way that reduces $\text{MSE}_{\text{test}}$ when the algorithm is allowed to gain experience by observing a training set $(X^{(\text{train})}, y^{(\text{train})})$. One intuitive way of doing this (which we will justify later, in Section 5.5.1) is just to minimize the mean squared error on the training set, $\text{MSE}_{\text{train}}$.
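Equations 5.4 and 5.5 are two ways of writing the same quantity, which is easy to verify numerically. A small sketch with synthetic targets and predictions (the values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 8
y_test = rng.normal(size=m)                          # synthetic targets
yhat_test = y_test + rng.normal(scale=0.1, size=m)   # noisy predictions

# Equation 5.4: mean of per-example squared errors.
mse_sum_form = np.mean((yhat_test - y_test) ** 2)

# Equation 5.5: squared Euclidean (L2) norm of the error vector over m.
mse_norm_form = np.linalg.norm(yhat_test - y_test) ** 2 / m

assert np.isclose(mse_sum_form, mse_norm_form)

# The error drops to exactly 0 when predictions equal the targets.
assert np.mean((y_test - y_test) ** 2) == 0.0
```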
To minimize $\text{MSE}_{\text{train}}$, we can simply solve for where its gradient is 0:

$$\nabla_w \text{MSE}_{\text{train}} = 0 \tag{5.6}$$

$$\Rightarrow \nabla_w \frac{1}{m} \left\| \hat{y}^{(\text{train})} - y^{(\text{train})} \right\|_2^2 = 0 \tag{5.7}$$

$$\Rightarrow \frac{1}{m} \nabla_w \left\| X^{(\text{train})} w - y^{(\text{train})} \right\|_2^2 = 0 \tag{5.8}$$
Figure 5.1: A linear regression problem, with a training set consisting of ten data points, each containing one feature. Because there is only one feature, the weight vector $w$ contains only a single parameter to learn, $w_1$. (Left, "Linear regression example": $y$ versus $x_1$.) Observe that linear regression learns to set $w_1$ such that the line $y = w_1 x$ comes as close as possible to passing through all the training points. (Right, "Optimization of $w$": $\text{MSE}_{\text{train}}$ versus $w_1$.) The plotted point indicates the value of $w_1$ found by the normal equations, which we can see minimizes the mean squared error on the training set.

$$\Rightarrow \nabla_w \left( X^{(\text{train})} w - y^{(\text{train})} \right)^\top \left( X^{(\text{train})} w - y^{(\text{train})} \right) = 0 \tag{5.9}$$
$$\Rightarrow \nabla_w \left( w^\top X^{(\text{train})\top} X^{(\text{train})} w - 2 w^\top X^{(\text{train})\top} y^{(\text{train})} + y^{(\text{train})\top} y^{(\text{train})} \right) = 0 \tag{5.10}$$

$$\Rightarrow 2 X^{(\text{train})\top} X^{(\text{train})} w - 2 X^{(\text{train})\top} y^{(\text{train})} = 0 \tag{5.11}$$

$$\Rightarrow w = \left( X^{(\text{train})\top} X^{(\text{train})} \right)^{-1} X^{(\text{train})\top} y^{(\text{train})} \tag{5.12}$$

The system of equations whose solution is given by Equation 5.12 is known as the normal equations. Evaluating Equation 5.12 constitutes a simple learning algorithm. For an example of the linear regression learning algorithm in action, see Figure 5.1.

It is worth noting that the term linear regression is often used to refer to a slightly more sophisticated model with one additional parameter, an intercept term $b$. In this model

$$\hat{y} = w^\top x + b \tag{5.13}$$

so the mapping from parameters to predictions is still a linear function, but the mapping from features to predictions is now an affine function. This extension to affine functions means that the plot of the model's predictions still looks like a line, but it need not pass through the origin.
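Equation 5.12 can be evaluated directly. The sketch below uses synthetic data with made-up true weights; it solves the normal equations with `np.linalg.solve` rather than forming the matrix inverse explicitly, which is numerically preferable, and also fits the intercept model of Equation 5.13 by the common trick of absorbing $b$ as the weight on an extra input fixed at 1.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 3
X_train = rng.normal(size=(m, n))
w_true = np.array([1.5, -2.0, 0.5])          # made-up ground truth
y_train = X_train @ w_true + rng.normal(scale=0.01, size=m)

# Equation 5.12: w = (X^T X)^{-1} X^T y.  Solving the linear system is
# preferred over explicitly inverting X^T X.
w = np.linalg.solve(X_train.T @ X_train, X_train.T @ y_train)
assert np.allclose(w, w_true, atol=0.05)

# The intercept model of equation 5.13 can reuse the same solver by
# absorbing b as the weight on an extra feature fixed at 1.
X_aug = np.hstack([X_train, np.ones((m, 1))])
wb = np.linalg.solve(X_aug.T @ X_aug, X_aug.T @ y_train)
b = wb[-1]          # learned bias; near zero for this zero-intercept data
assert abs(b) < 0.05
```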
Instead of adding the bias parameter $b$,
one can continue to use the model with only weights, but augment $x$ with an extra entry that is always set to 1. The weight corresponding to the extra entry then plays the role of the bias parameter. We will frequently use the term "linear" when referring to affine functions throughout this book.

The intercept term $b$ is often called the bias parameter of the affine transformation. This terminology derives from the point of view that the output of the transformation is biased toward being $b$ in the absence of any input. This term is distinct from the idea of a statistical bias, in which a statistical estimation algorithm's expected estimate of a quantity is not equal to the true quantity.

Linear regression is of course an extremely simple and limited learning algorithm, but it provides an example of how a learning algorithm can work. In the subsequent sections we will describe some of the basic principles underlying learning algorithm design and demonstrate how these principles can be used to build more complicated learning algorithms.

5.2 Capacity, Overfitting and Underfitting

The central challenge in machine learning is that we must perform well on new, previously unseen inputs, not just those on which our model was trained. The ability to perform well on previously unobserved inputs is called generalization.
Typically, when training a machine learning model, we have access to a training set; we can compute some error measure on the training set, called the training error; and we reduce this training error. So far, what we have described is simply an optimization problem. What separates machine learning from optimization is that we want the generalization error, also called the test error, to be low as well. The generalization error is defined as the expected value of the error on a new input. Here the expectation is taken across different possible inputs, drawn from the distribution of inputs we expect the system to encounter in practice.

We typically estimate the generalization error of a machine learning model by measuring its performance on a test set of examples that were collected separately from the training set.

In our linear regression example, we trained the model by minimizing the training error,

$$\frac{1}{m^{(\text{train})}} \left\| X^{(\text{train})} w - y^{(\text{train})} \right\|_2^2, \tag{5.14}$$
but we actually care about the test error,

$$\frac{1}{m^{(\text{test})}} \left\| X^{(\text{test})} w - y^{(\text{test})} \right\|_2^2.$$

How can we affect performance on the test set when we get to observe only the training set?
The field of statistical learning theory provides some answers. If the training and the test set are collected arbitrarily, there is indeed little we can do. If we are allowed to make some assumptions about how the training and test set are collected, then we can make some progress.

The train and test data are generated by a probability distribution over datasets called the data generating process. We typically make a set of assumptions known collectively as the i.i.d. assumptions. These assumptions are that the examples in each dataset are independent from each other, and that the train set and test set are identically distributed, drawn from the same probability distribution as each other. This assumption allows us to describe the data generating process with a probability distribution over a single example. The same distribution is then used to generate every train example and every test example. We call that shared underlying distribution the data generating distribution, denoted $p_{\text{data}}$. This probabilistic framework and the i.i.d. assumptions allow us to mathematically study the relationship between training error and test error.
One immediate connection we can observe between the training and test error is that the expected training error of a randomly selected model is equal to the expected test error of that model. Suppose we have a probability distribution $p(x, y)$ and we sample from it repeatedly to generate the train set and the test set. For some fixed value $w$, the expected training set error is exactly the same as the expected test set error, because both expectations are formed using the same dataset sampling process. The only difference between the two conditions is the name we assign to the dataset we sample.

Of course, when we use a machine learning algorithm, we do not fix the parameters ahead of time, then sample both datasets. We sample the training set, then use it to choose the parameters to reduce training set error, then sample the test set. Under this process, the expected test error is greater than or equal to the expected value of the training error. The factors determining how well a machine learning algorithm will perform are its ability to:

1. Make the training error small.
2. Make the gap between training and test error small.
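The two claims above, that train and test error match in expectation for a fixed $w$ but diverge once $w$ is fit to the training set, can be checked with a small Monte Carlo simulation. This sketch uses an arbitrary synthetic data generating process; all constants are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dataset(m):
    # Shared data-generating process for train and test sets (i.i.d.).
    X = rng.normal(size=(m, 2))
    y = X @ np.array([1.0, -1.0]) + rng.normal(scale=0.5, size=m)
    return X, y

def mse(X, y, w):
    return np.mean((X @ w - y) ** 2)

# For a FIXED w chosen before seeing any data, train and test MSE are
# draws from the same distribution: averaged over many dataset pairs,
# they match closely.
w_fixed = np.array([0.3, 0.7])
train_errs, test_errs = [], []
for _ in range(2000):
    Xtr, ytr = sample_dataset(20)
    Xte, yte = sample_dataset(20)
    train_errs.append(mse(Xtr, ytr, w_fixed))
    test_errs.append(mse(Xte, yte, w_fixed))
gap_fixed = np.mean(train_errs) - np.mean(test_errs)

# If instead w is FIT to each training set, training error is
# optimistically low: expected test error >= expected training error.
train_errs, test_errs = [], []
for _ in range(2000):
    Xtr, ytr = sample_dataset(20)
    Xte, yte = sample_dataset(20)
    w_hat = np.linalg.lstsq(Xtr, ytr, rcond=None)[0]
    train_errs.append(mse(Xtr, ytr, w_hat))
    test_errs.append(mse(Xte, yte, w_hat))
gap_fitted = np.mean(test_errs) - np.mean(train_errs)

assert abs(gap_fixed) < 0.2   # roughly equal for fixed w
assert gap_fitted > 0.0       # fitted w: test error exceeds training error
```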
These two factors correspond to the two central challenges in machine learning: underfitting and overfitting. Underfitting occurs when the model is not able to obtain a sufficiently low error value on the training set. Overfitting occurs when the gap between the training error and test error is too large.

We can control whether a model is more likely to overfit or underfit by altering its capacity. Informally, a model's capacity is its ability to fit a wide variety of
functions. Models with low capacity may struggle to fit the training set. Models with high capacity can overfit by memorizing properties of the training set that do not serve them well on the test set.

One way to control the capacity of a learning algorithm is by choosing its hypothesis space, the set of functions that the learning algorithm is allowed to select as being the solution. For example, the linear regression algorithm has the set of all linear functions of its input as its hypothesis space. We can generalize linear regression to include polynomials, rather than just linear functions, in its hypothesis space. Doing so increases the model's capacity.

A polynomial of degree one gives us the linear regression model with which we are already familiar, with prediction

$$\hat{y} = b + w x. \tag{5.15}$$

By introducing $x^2$ as another feature provided to the linear regression model, we can learn a model that is quadratic as a function of $x$:

$$\hat{y} = b + w_1 x + w_2 x^2. \tag{5.16}$$

Though this model implements a quadratic function of its input, the output is still a linear function of the parameters, so we can still use the normal equations to train the model in closed form.
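Equation 5.16 is ordinary linear regression applied to an expanded feature vector $(1, x, x^2)$, so the normal equations recover a quadratic's coefficients. A sketch on synthetic data (the generating coefficients and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 30
x = rng.uniform(-2.0, 2.0, size=m)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.05, size=m)

# Expanded design matrix with columns (1, x, x^2). The model is
# quadratic in x but linear in the parameters (b, w1, w2), so the
# normal equations still apply.
Phi = np.column_stack([np.ones(m), x, x**2])
b, w1, w2 = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)

# With low noise, the fit recovers the generating coefficients closely.
assert np.allclose([b, w1, w2], [1.0, 2.0, -3.0], atol=0.1)
```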
We can continue to add more powers of $x$ as additional features, for example to obtain a polynomial of degree 9:

$$\hat{y} = b + \sum_{i=1}^{9} w_i x^i. \tag{5.17}$$

Machine learning algorithms will generally perform best when their capacity is appropriate for the true complexity of the task they need to perform and the amount of training data they are provided with. Models with insufficient capacity are unable to solve complex tasks. Models with high capacity can solve complex tasks, but when their capacity is higher than needed to solve the present task, they may overfit.

Figure 5.2 shows this principle in action. We compare a linear, quadratic and degree-9 predictor attempting to fit a problem where the true underlying function is quadratic. The linear function is unable to capture the curvature in the true underlying problem, so it underfits. The degree-9 predictor is capable of representing the correct function, but it is also capable of representing infinitely many other functions that pass exactly through the training points, because
we
have more parameters than training examples. We have little chance of choosing a solution that generalizes well when so many wildly different solutions exist. In this example, the quadratic model is perfectly matched to the true structure of the task, so it generalizes well to new data.

Figure 5.2: We fit three models to this example training set. The training data was generated synthetically, by randomly sampling $x$ values and choosing $y$ deterministically by evaluating a quadratic function. (Left) A linear function fit to the data suffers from underfitting; it cannot capture the curvature that is present in the data. (Center) A quadratic function fit to the data generalizes well to unseen points. It does not suffer from a significant amount of overfitting or underfitting. (Right) A polynomial of degree 9 fit to the data suffers from overfitting. Here we used the Moore-Penrose pseudoinverse to solve the underdetermined normal equations. The solution passes through all of the training points exactly, but we have not been lucky enough for it to extract the correct structure.
It now has a deep valley in between two training points that does not appear in the true underlying function. It also increases sharply on the left side of the data, while the true function decreases in this area.

So far we have described only one way of changing a model's capacity: by changing the number of input features it has, and simultaneously adding new parameters associated with those features. There are in fact many ways of changing a model's capacity. Capacity is not determined only by the choice of model. The model specifies which family of functions the learning algorithm can choose from when varying the parameters in order to reduce a training objective. This is called the representational capacity of the model. In many cases, finding the best function within this family is a very difficult optimization problem. In practice, the learning algorithm does not actually find the best function, but merely one that significantly reduces the training error. These additional limitations, such as
the imperfection of the optimization algorithm, mean that the learning algorithm's effective capacity may be less than the representational capacity of the model family.

Our modern ideas about improving the generalization of machine learning models are refinements of thought dating back to philosophers at least as early as Ptolemy. Many early scholars invoke a principle of parsimony that is now most widely known as Occam's razor (c. 1287-1347). This principle states that among competing hypotheses that explain known observations equally well, one should choose the "simplest" one. This idea was formalized and made more precise in the 20th century by the founders of statistical learning theory (Vapnik and Chervonenkis, 1971; Vapnik, 1982; Blumer et al., 1989; Vapnik, 1995).

Statistical learning theory provides various means of quantifying model capacity. Among these, the most well-known is the Vapnik-Chervonenkis dimension, or VC dimension. The VC dimension measures the capacity of a binary classifier.
The VC dimension is defined as being the largest possible value of $m$ for which there exists a training set of $m$ different $x$ points that the classifier can label arbitrarily.

Quantifying the capacity of the model allows statistical learning theory to make quantitative predictions. The most important results in statistical learning theory show that the discrepancy between training error and generalization error is bounded from above by a quantity that grows as the model capacity grows but shrinks as the number of training examples increases (Vapnik and Chervonenkis, 1971; Vapnik, 1982; Blumer et al., 1989; Vapnik, 1995). These bounds provide intellectual justification that machine learning algorithms can work, but they are rarely used in practice when working with deep learning algorithms. This is in part because the bounds are often quite loose and in part because it can be quite difficult to determine the capacity of deep learning algorithms.
The problem of determining the capacity of a deep learning model is especially difficult because the effective capacity is limited by the capabilities of the optimization algorithm, and we have little theoretical understanding of the very general non-convex optimization problems involved in deep learning.

We must remember that while simpler functions are more likely to generalize (to have a small gap between training and test error), we must still choose a sufficiently complex hypothesis to achieve low training error. Typically, training error decreases until it asymptotes to the minimum possible error value as model capacity increases (assuming the error measure has a minimum value). Typically, generalization error has a U-shaped curve as a function of model capacity. This is illustrated in Figure 5.3.

To reach the most extreme case of arbitrarily high capacity, we introduce
Figure 5.3: Typical relationship between capacity and error (axes: capacity versus error, with the underfitting zone left of the optimal capacity and the overfitting zone to its right; the curves show the training error, the generalization error, and the generalization gap between them). Training and test error behave differently. At the left end of the graph, training error and generalization error are both high. This is the underfitting regime. As we increase capacity, training error decreases, but the gap between training and generalization error increases. Eventually, the size of this gap outweighs the decrease in training error, and we enter the overfitting regime, where capacity is too large, above the optimal capacity.

the concept of non-parametric models. So far, we have seen only parametric models, such as linear regression. Parametric models learn a function described by a parameter vector whose size is finite and fixed before any data is observed. Non-parametric models have no such limitation.

Sometimes, non-parametric models are just theoretical abstractions (such as an algorithm that searches over all possible probability distributions) that cannot be implemented in practice. However, we can also design practical non-parametric models by making their complexity a function of the training set size. One example of such an algorithm is nearest neighbor regression.
Unlike linear regression, which has a fixed-length vector of weights, the nearest neighbor regression model simply stores the $X$ and $y$ from the training set. When asked to classify a test point $x$, the model looks up the nearest entry in the training set and returns the associated regression target. In other words, $\hat{y} = y_i$ where $i = \arg\min_i \| X_{i,:} - x \|_2^2$. The algorithm can also be generalized to distance metrics other than the $L^2$ norm, such as learned distance metrics (Goldberger et al., 2005). If the algorithm is allowed to break ties by averaging the $y_i$ values for all $X_{i,:}$ that are tied for nearest, then this algorithm is able to achieve the minimum possible training error (which might be greater than zero, if two identical inputs are associated with different outputs) on any regression dataset.
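Nearest neighbor regression as just described is only a few lines. This is an illustrative sketch: synthetic data, single nearest neighbor only, and no tie-breaking by averaging.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 2))
y_train = np.sin(X_train[:, 0]) + X_train[:, 1]   # arbitrary synthetic targets

def nearest_neighbor_predict(x, X, y):
    # \hat{y} = y_i  where  i = argmin_i ||X_{i,:} - x||_2^2
    dists = np.sum((X - x) ** 2, axis=1)
    return y[np.argmin(dists)]

# A query at a stored training point returns that point's own target,
# so training error is zero (absent identical inputs with different labels).
assert nearest_neighbor_predict(X_train[7], X_train, y_train) == y_train[7]

# A novel query returns the target of whichever stored example is closest.
pred = nearest_neighbor_predict(np.array([0.1, -0.2]), X_train, y_train)
assert pred in y_train
```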
Finally, we can also create a non-parametric learning algorithm by wrapping a parametric learning algorithm inside another algorithm that increases the number of parameters as needed. For example, we could imagine an outer loop of learning that changes the degree of the polynomial learned by linear regression on top of a polynomial expansion of the input.

The ideal model is an oracle that simply knows the true probability distribution that generates the data. Even such a model will still incur some error on many problems, because there may still be some noise in the distribution. In the case of supervised learning, the mapping from x to y may be inherently stochastic, or y may be a deterministic function that involves other variables besides those included in x. The error incurred by an oracle making predictions from the true distribution p(x, y) is called the Bayes error.

Training and generalization error vary as the size of the training set varies. Expected generalization error can never increase as the number of training examples increases. For non-parametric models, more data yields better generalization until the best possible error is achieved.
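Such an outer loop might be sketched as follows; the helper names and the use of a held-out set to choose among degrees are our own assumptions, not from the text:

```python
import numpy as np

def fit_poly(x, y, degree):
    # Least-squares fit of a polynomial of the given degree
    # (linear regression on a polynomial expansion of the input).
    return np.polyfit(x, y, degree)

def grow_degree(x_train, y_train, x_val, y_val, max_degree=9):
    """Outer loop that adds parameters (polynomial terms) as needed,
    keeping the degree whose held-out error is lowest."""
    best_w, best_err = None, np.inf
    for degree in range(max_degree + 1):
        w = fit_poly(x_train, y_train, degree)
        err = np.mean((np.polyval(w, x_val) - y_val) ** 2)
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```

The wrapped learner is still parametric at each step; the wrapper makes the overall procedure non-parametric by letting the number of parameters depend on the data.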
Any fixed parametric model with less than optimal capacity will asymptote to an error value that exceeds the Bayes error. See figure 5.4 for an illustration. Note that it is possible for the model to have optimal capacity and yet still have a large gap between training and generalization error. In this situation, we may be able to reduce this gap by gathering more training examples.

5.2.1 The No Free Lunch Theorem

Learning theory claims that a machine learning algorithm can generalize well from a finite training set of examples. This seems to contradict some basic principles of logic. Inductive reasoning, or inferring general rules from a limited set of examples, is not logically valid. To logically infer a rule describing every member of a set, one must have information about every member of that set.

In part, machine learning avoids this problem by offering only probabilistic rules, rather than the entirely certain rules used in purely logical reasoning. Machine learning promises to find rules that are probably correct about most members of the set they concern.
Unfortunately, even this does not resolve the entire problem. The no free lunch theorem for machine learning (Wolpert, 1996) states that, averaged over all possible data generating distributions, every classification algorithm has the same error rate when classifying previously unobserved points. In other words, in some sense, no machine learning algorithm is universally any better than any other. The most sophisticated algorithm we can conceive of has the same average
Figure 5.4: The effect of the training dataset size on the training and test error, as well as on the optimal model capacity. We constructed a synthetic regression problem based on adding a moderate amount of noise to a degree-5 polynomial, generated a single test set, and then generated several different sizes of training set. For each size, we generated 40 different training sets in order to plot error bars showing 95 percent confidence intervals. (Top) The MSE on the training and test set for two different models: a quadratic model, and a model with degree chosen to minimize the test error. Both are fit in closed form. For the quadratic model, the training error increases as the size of the training set increases. This is because larger datasets are harder to fit. Simultaneously, the test error decreases, because fewer incorrect hypotheses are consistent with the training data. The quadratic model does not have enough capacity to solve the task, so its test error asymptotes to a high value.
The test error at optimal capacity asymptotes to the Bayes error. The training error can fall below the Bayes error, due to the ability of the training algorithm to memorize specific instances of the training set. As the training set size increases to infinity, the training error of any fixed-capacity model (here, the quadratic model) must rise to at least the Bayes error. (Bottom) As the training set size increases, the optimal capacity (shown here as the degree of the optimal polynomial regressor) increases. The optimal capacity plateaus after reaching sufficient complexity to solve the task.
performance (over all possible tasks) as merely predicting that every point belongs to the same class.

Fortunately, these results hold only when we average over all possible data generating distributions. If we make assumptions about the kinds of probability distributions we encounter in real-world applications, then we can design learning algorithms that perform well on these distributions. This means that the goal of machine learning research is not to seek a universal learning algorithm or the absolute best learning algorithm. Instead, our goal is to understand what kinds of distributions are relevant to the "real world" that an AI agent experiences, and what kinds of machine learning algorithms perform well on data drawn from the kinds of data generating distributions we care about.

5.2.2 Regularization

The no free lunch theorem implies that we must design our machine learning algorithms to perform well on a specific task. We do so by building a set of preferences into the learning algorithm. When these preferences are aligned with the learning problems we ask the algorithm to solve, it performs better.
So far, the only method of modifying a learning algorithm that we have discussed concretely is to increase or decrease the model's representational capacity by adding or removing functions from the hypothesis space of solutions the learning algorithm is able to choose. We gave the specific example of increasing or decreasing the degree of a polynomial for a regression problem. The view we have described so far is oversimplified.

The behavior of our algorithm is strongly affected not just by how large we make the set of functions allowed in its hypothesis space, but by the specific identity of those functions. The learning algorithm we have studied so far, linear regression, has a hypothesis space consisting of the set of linear functions of its input. These linear functions can be very useful for problems where the relationship between inputs and outputs truly is close to linear. They are less useful for problems that behave in a very nonlinear fashion. For example, linear regression would not perform very well if we tried to use it to predict sin(x) from x.
We can thus control the performance of our algorithms by choosing what kind of functions we allow them to draw solutions from, as well as by controlling the amount of these functions. We can also give a learning algorithm a preference for one solution in its hypothesis space over another. This means that both functions are eligible, but one is preferred. The unpreferred solution will be chosen only if it fits the training
data significantly better than the preferred solution.

For example, we can modify the training criterion for linear regression to include weight decay. To perform linear regression with weight decay, we minimize a sum comprising both the mean squared error on the training set and a criterion J(w) that expresses a preference for the weights to have smaller squared L² norm. Specifically,

J(w) = MSE_train + λ w⊤w,    (5.18)

where λ is a value chosen ahead of time that controls the strength of our preference for smaller weights. When λ = 0, we impose no preference, and larger λ forces the weights to become smaller. Minimizing J(w) results in a choice of weights that make a tradeoff between fitting the training data and being small. This gives us solutions that have a smaller slope, or that put weight on fewer of the features. As an example of how we can control a model's tendency to overfit or underfit via weight decay, we can train a high-degree polynomial regression model with different values of λ. See figure 5.5 for the results.
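Setting the gradient of equation 5.18 to zero gives a closed-form minimizer, sketched below; the function name is our own, and the m·λ factor arises because MSE_train averages over the m training examples:

```python
import numpy as np

def weight_decay_regression(X, y, lam):
    """Linear regression with weight decay: minimize
    (1/m) * ||X w - y||^2 + lam * w^T w. Setting the gradient to
    zero gives w = (X^T X + m * lam * I)^{-1} X^T y."""
    m, n = X.shape
    # Solve the regularized normal equations directly.
    return np.linalg.solve(X.T @ X + m * lam * np.eye(n), X.T @ y)
```

Increasing lam shrinks the learned weights toward zero, trading training fit for smaller parameters, which is the tradeoff the text describes.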
Figure 5.5: We fit a high-degree polynomial regression model to our example training set from figure 5.2. The true function is quadratic, but here we use only models with degree 9. We vary the amount of weight decay to prevent these high-degree models from overfitting. (Left) With very large λ, we can force the model to learn a function with no slope at all. This underfits because it can only represent a constant function. (Center) With a medium value of λ, the learning algorithm recovers a curve with the right general shape. Even though the model is capable of representing functions with much more complicated shape, weight decay has encouraged it to use a simpler function described by smaller coefficients. (Right) With weight decay approaching zero (i.e., using the Moore-Penrose pseudoinverse to solve the underdetermined problem with minimal regularization), the degree-9 polynomial overfits significantly, as we saw in figure 5.2.
More generally, we can regularize a model that learns a function f(x; θ) by adding a penalty called a regularizer to the cost function. In the case of weight decay, the regularizer is Ω(w) = w⊤w. In chapter 7, we will see that many other regularizers are possible.

Expressing preferences for one function over another is a more general way of controlling a model's capacity than including or excluding members from the hypothesis space. We can think of excluding a function from a hypothesis space as expressing an infinitely strong preference against that function.

In our weight decay example, we expressed our preference for linear functions defined with smaller weights explicitly, via an extra term in the criterion we minimize. There are many other ways of expressing preferences for different solutions, both implicitly and explicitly. Together, these different approaches are known as regularization. Regularization is any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error. Regularization is one of the central concerns of the field of machine learning, rivaled in its importance only by optimization.
The no free lunch theorem has made it clear that there is no best machine learning algorithm, and, in particular, no best form of regularization. Instead we must choose a form of regularization that is well-suited to the particular task we want to solve. The philosophy of deep learning in general and this book in particular is that a very wide range of tasks (such as all of the intellectual tasks that people can do) may all be solved effectively using very general-purpose forms of regularization.

5.3 Hyperparameters and Validation Sets

Most machine learning algorithms have several settings that we can use to control the behavior of the learning algorithm. These settings are called hyperparameters. The values of hyperparameters are not adapted by the learning algorithm itself (though we can design a nested learning procedure where one learning algorithm learns the best hyperparameters for another learning algorithm).

In the polynomial regression example we saw in figure 5.2, there is a single hyperparameter: the degree of the polynomial, which acts as a capacity hyperparameter.
The λ value used to control the strength of weight decay is another example of a hyperparameter.

Sometimes a setting is chosen to be a hyperparameter that the learning algorithm does not learn because it is difficult to optimize. More frequently, the
setting must be a hyperparameter because it is not appropriate to learn that hyperparameter on the training set. This applies to all hyperparameters that control model capacity. If learned on the training set, such hyperparameters would always choose the maximum possible model capacity, resulting in overfitting (refer to figure 5.3). For example, we can always fit the training set better with a higher degree polynomial and a weight decay setting of λ = 0 than we could with a lower degree polynomial and a positive weight decay setting.

To solve this problem, we need a validation set of examples that the training algorithm does not observe.

Earlier we discussed how a held-out test set, composed of examples coming from the same distribution as the training set, can be used to estimate the generalization error of a learner, after the learning process has completed. It is important that the test examples are not used in any way to make choices about the model, including its hyperparameters. For this reason, no example from the test set can be used in the validation set. Therefore, we always construct the validation set from the training data.
Specifically, we split the training data into two disjoint subsets. One of these subsets is used to learn the parameters. The other subset is our validation set, used to estimate the generalization error during or after training, allowing the hyperparameters to be updated accordingly. The subset of data used to learn the parameters is still typically called the training set, even though this may be confused with the larger pool of data used for the entire training process. The subset of data used to guide the selection of hyperparameters is called the validation set. Typically, one uses about 80% of the training data for training and 20% for validation. Since the validation set is used to "train" the hyperparameters, the validation set error will underestimate the generalization error, though typically by a smaller amount than the training error. After all hyperparameter optimization is complete, the generalization error may be estimated using the test set.
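A minimal sketch of this 80/20 split; shuffling before splitting is our own assumption, since the text only specifies the proportions:

```python
import numpy as np

def train_validation_split(X, y, validation_fraction=0.2, seed=0):
    """Shuffle the examples, then hold out the given fraction as a
    validation set (the 80/20 split described in the text)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    n_val = int(len(X) * validation_fraction)
    val, train = order[:n_val], order[n_val:]
    return X[train], y[train], X[val], y[val]
```

The validation examples must never come from the test set; here they are carved out of the training data only, as the text requires.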
In practice, when the same test set has been used repeatedly to evaluate performance of different algorithms over many years, and especially if we consider all the attempts from the scientific community at beating the reported state-of-the-art performance on that test set, we end up having optimistic evaluations with the test set as well. Benchmarks can thus become stale and then do not reflect the true field performance of a trained system. Thankfully, the community tends to move on to new (and usually more ambitious and larger) benchmark datasets.
5.3.1 Cross-Validation

Dividing the dataset into a fixed training set and a fixed test set can be problematic if it results in the test set being small. A small test set implies statistical uncertainty around the estimated average test error, making it difficult to claim that algorithm A works better than algorithm B on the given task.

When the dataset has hundreds of thousands of examples or more, this is not a serious issue. When the dataset is too small, alternative procedures enable one to use all of the examples in the estimation of the mean test error, at the price of increased computational cost. These procedures are based on the idea of repeating the training and testing computation on different randomly chosen subsets or splits of the original dataset. The most common of these is the k-fold cross-validation procedure, shown in algorithm 5.1, in which a partition of the dataset is formed by splitting it into k non-overlapping subsets. The test error may then be estimated by taking the average test error across k trials. On trial i, the i-th subset of the data is used as the test set and the rest of the data is used as the training set.
One problem is that there exist no unbiased estimators of the variance of such average error estimators (Bengio and Grandvalet, 2004), but approximations are typically used.

5.4 Estimators, Bias and Variance

The field of statistics gives us many tools that can be used to achieve the machine learning goal of solving a task not only on the training set but also to generalize beyond it. Foundational concepts such as parameter estimation, bias and variance are useful to formally characterize notions of generalization, underfitting and overfitting.

5.4.1 Point Estimation

Point estimation is the attempt to provide the single "best" prediction of some quantity of interest. In general the quantity of interest can be a single parameter or a vector of parameters in some parametric model, such as the weights in our linear regression example in section 5.1.4, but it can also be a whole function.
In order to distinguish estimates of parameters from their true value, our convention will be to denote a point estimate of a parameter θ by θ̂.

Let {x(1), ..., x(m)} be a set of m independent and identically distributed
Algorithm 5.1 The k-fold cross-validation algorithm. It can be used to estimate generalization error of a learning algorithm A when the given dataset D is too small for a simple train/test or train/valid split to yield accurate estimation of generalization error, because the mean of a loss L on a small test set may have too high variance. The dataset D contains as elements the abstract examples z(i) (for the i-th example), which could stand for an (input, target) pair z(i) = (x(i), y(i)) in the case of supervised learning, or for just an input z(i) = x(i) in the case of unsupervised learning. The algorithm returns the vector of errors e for each example in D, whose mean is the estimated generalization error. The errors on individual examples can be used to compute a confidence interval around the mean (equation 5.47). While these confidence intervals are not well-justified after the use of cross-validation, it is still common practice to use them to declare that algorithm A is better than algorithm B only if the confidence interval of the error of algorithm A lies below and does not intersect the confidence interval of algorithm B.
Define KFoldXV(D, A, L, k):
Require: D, the given dataset, with elements z(i)
Require: A, the learning algorithm, seen as a function that takes a dataset as input and outputs a learned function
Require: L, the loss function, seen as a function from a learned function f and an example z(i) ∈ D to a scalar ∈ R
Require: k, the number of folds
  Split D into k mutually exclusive subsets D_i, whose union is D.
  for i from 1 to k do
    f_i = A(D \ D_i)
    for z(j) in D_i do
      e_j = L(f_i, z(j))
    end for
  end for
  return e
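Algorithm 5.1 can be transcribed almost directly; representing D as a Python list of examples and using NumPy for the index bookkeeping are our own choices:

```python
import numpy as np

def k_fold_xv(D, A, L, k):
    """The KFoldXV procedure of algorithm 5.1: split D into k
    non-overlapping folds, train A on all data outside fold i, and
    record the loss L of the learned function on every example in
    fold i. Returns the per-example errors; their mean estimates
    the generalization error."""
    folds = np.array_split(np.arange(len(D)), k)
    errors = np.empty(len(D))
    for fold in folds:
        held_out = np.zeros(len(D), dtype=bool)
        held_out[fold] = True
        # Train on D \ D_i ...
        f_i = A([D[j] for j in np.flatnonzero(~held_out)])
        # ... and evaluate on every example of D_i.
        for j in fold:
            errors[j] = L(f_i, D[j])
    return errors
```

Every example in D serves exactly once as a test example, which is what lets the procedure use all of the data for error estimation.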
(i.i.d.) data points. A point estimator or statistic is any function of the data:

θ̂_m = g(x(1), ..., x(m)).    (5.19)

The definition does not require that g return a value that is close to the true θ or even that the range of g is the same as the set of allowable values of θ. This definition of a point estimator is very general and allows the designer of an estimator great flexibility. While almost any function thus qualifies as an estimator, a good estimator is a function whose output is close to the true underlying θ that generated the training data.

For now, we take the frequentist perspective on statistics. That is, we assume that the true parameter value θ is fixed but unknown, while the point estimate θ̂ is a function of the data. Since the data is drawn from a random process, any function of the data is random. Therefore θ̂ is a random variable.
Point estimation can also refer to the estimation of the relationship between input and target variables. We refer to these types of point estimates as function estimators.

Function Estimation  As we mentioned above, sometimes we are interested in performing function estimation (or function approximation). Here we are trying to predict a variable y given an input vector x. We assume that there is a function f(x) that describes the approximate relationship between y and x. For example, we may assume that y = f(x) + ε, where ε stands for the part of y that is not predictable from x. In function estimation, we are interested in approximating f with a model or estimate f̂. Function estimation is really just the same as estimating a parameter θ; the function estimator f̂ is simply a point estimator in function space. The linear regression example (discussed above in section 5.1.4) and the polynomial regression example (discussed in section 5.2) are both examples of scenarios that may be interpreted either as estimating a parameter w or estimating a function f̂ mapping from x to y.
We now review the most commonly studied properties of point estimators and discuss what they tell us about these estimators.

5.4.2 Bias

The bias of an estimator is defined as

bias(θ̂_m) = E(θ̂_m) − θ,    (5.20)
where the expectation is over the data (seen as samples from a random variable) and θ is the true underlying value of θ used to define the data generating distribution. An estimator θ̂_m is said to be unbiased if bias(θ̂_m) = 0, which implies that E(θ̂_m) = θ. An estimator θ̂_m is said to be asymptotically unbiased if lim_{m→∞} bias(θ̂_m) = 0, which implies that lim_{m→∞} E(θ̂_m) = θ.

Example: Bernoulli Distribution  Consider a set of samples {x(1), ..., x(m)} that are independently and identically distributed according to a Bernoulli distribution with mean θ:

P(x(i); θ) = θ^{x(i)} (1 − θ)^{(1 − x(i))}.    (5.21)

A common estimator for the θ parameter of this distribution is the mean of the training samples:

θ̂_m = (1/m) Σ_{i=1}^{m} x(i).    (5.22)
To determine whether this estimator is biased, we can substitute equation 5.22 into equation 5.20:

bias(θ̂_m) = E[θ̂_m] − θ    (5.23)
          = E[(1/m) Σ_{i=1}^{m} x(i)] − θ    (5.24)
          = (1/m) Σ_{i=1}^{m} E[x(i)] − θ    (5.25)
          = (1/m) Σ_{i=1}^{m} Σ_{x(i)=0}^{1} x(i) θ^{x(i)} (1 − θ)^{(1 − x(i))} − θ    (5.26)
          = (1/m) Σ_{i=1}^{m} θ − θ    (5.27)
          = θ − θ = 0    (5.28)

Since bias(θ̂_m) = 0, we say that our estimator θ̂_m is unbiased.
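The algebra above can also be checked numerically, as in this sketch; the sample size, number of trials, and seed are arbitrary choices of ours:

```python
import numpy as np

# Numerically check that the sample mean is an unbiased estimator of
# the Bernoulli parameter: average the estimator over many simulated
# datasets and compare with the true theta.
rng = np.random.default_rng(0)
theta, m, trials = 0.3, 10, 200_000

# Each row is one dataset of m Bernoulli(theta) samples; each row's
# mean is one realization of the estimator theta_hat_m.
samples = rng.binomial(1, theta, size=(trials, m))
theta_hat = samples.mean(axis=1)

bias_estimate = theta_hat.mean() - theta  # should be close to 0
```

The residual bias_estimate shrinks as the number of trials grows, consistent with equation 5.28.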
Example: Gaussian Distribution Estimator of the Mean  Now, consider a set of samples {x(1), ..., x(m)} that are independently and identically distributed according to a Gaussian distribution p(x(i)) = N(x(i); µ, σ²), where i ∈ {1, ..., m}.
Recall that the Gaussian probability density function is given by

$$p(x^{(i)}; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{1}{2} \frac{(x^{(i)} - \mu)^2}{\sigma^2}\right). \tag{5.29}$$

A common estimator of the Gaussian mean parameter is known as the sample mean:

$$\hat{\mu}_m = \frac{1}{m} \sum_{i=1}^{m} x^{(i)}. \tag{5.30}$$

To determine the bias of the sample mean, we are again interested in calculating its expectation:

$$\begin{aligned}
\mathrm{bias}(\hat{\mu}_m) &= \mathbb{E}[\hat{\mu}_m] - \mu && (5.31)\\
&= \mathbb{E}\left[\frac{1}{m} \sum_{i=1}^{m} x^{(i)}\right] - \mu && (5.32)\\
&= \frac{1}{m} \sum_{i=1}^{m} \mathbb{E}\left[x^{(i)}\right] - \mu && (5.33)\\
&= \left(\frac{1}{m} \sum_{i=1}^{m} \mu\right) - \mu && (5.34)\\
&= \mu - \mu = 0. && (5.35)
\end{aligned}$$

Thus we find that the sample mean is an unbiased estimator of the Gaussian mean parameter.
Example: Estimators of the variance of a Gaussian distribution. As an example, we compare two different estimators of the variance parameter $\sigma^2$ of a Gaussian distribution. We are interested in knowing if either estimator is biased.

The first estimator of $\sigma^2$ we consider is known as the sample variance:

$$\hat{\sigma}_m^2 = \frac{1}{m} \sum_{i=1}^{m} \left(x^{(i)} - \hat{\mu}_m\right)^2, \tag{5.36}$$

where $\hat{\mu}_m$ is the sample mean, defined above. More formally, we are interested in computing

$$\mathrm{bias}(\hat{\sigma}_m^2) = \mathbb{E}[\hat{\sigma}_m^2] - \sigma^2. \tag{5.37}$$
We begin by evaluating the term $\mathbb{E}[\hat{\sigma}_m^2]$:

$$\begin{aligned}
\mathbb{E}[\hat{\sigma}_m^2] &= \mathbb{E}\left[\frac{1}{m} \sum_{i=1}^{m} \left(x^{(i)} - \hat{\mu}_m\right)^2\right] && (5.38)\\
&= \frac{m-1}{m} \sigma^2. && (5.39)
\end{aligned}$$

Returning to equation 5.37, we conclude that the bias of $\hat{\sigma}_m^2$ is $-\sigma^2 / m$. Therefore, the sample variance is a biased estimator.

The unbiased sample variance estimator

$$\tilde{\sigma}_m^2 = \frac{1}{m-1} \sum_{i=1}^{m} \left(x^{(i)} - \hat{\mu}_m\right)^2 \tag{5.40}$$

provides an alternative approach. As the name suggests, this estimator is unbiased. That is, we find that $\mathbb{E}[\tilde{\sigma}_m^2] = \sigma^2$:

$$\begin{aligned}
\mathbb{E}[\tilde{\sigma}_m^2] &= \mathbb{E}\left[\frac{1}{m-1} \sum_{i=1}^{m} \left(x^{(i)} - \hat{\mu}_m\right)^2\right] && (5.41)\\
&= \frac{m}{m-1} \mathbb{E}[\hat{\sigma}_m^2] && (5.42)\\
&= \frac{m}{m-1} \left(\frac{m-1}{m} \sigma^2\right) && (5.43)\\
&= \sigma^2. && (5.44)
\end{aligned}$$
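The gap between the two estimators can be seen numerically. This is an illustrative sketch with invented values of sigma^2, m, and the trial count; NumPy's ddof argument switches between the two divisors:

```python
import numpy as np

# Hypothetical demo: compare the biased sample variance (divide by m) with
# the unbiased estimator (divide by m - 1). All values are invented.
rng = np.random.default_rng(1)
sigma2 = 4.0      # true variance
m = 5             # samples per dataset
trials = 300_000  # number of simulated datasets

x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, m))
biased = x.var(axis=1, ddof=0)    # equation 5.36: divides by m
unbiased = x.var(axis=1, ddof=1)  # equation 5.40: divides by m - 1

print(biased.mean())    # close to (m - 1) / m * sigma2 = 3.2
print(unbiased.mean())  # close to sigma2 = 4.0
```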
We have two estimators: one is biased and the other is not. While unbiased estimators are clearly desirable, they are not always the "best" estimators. As we will see, we often use biased estimators that possess other important properties.

5.4.3 Variance and Standard Error

Another property of the estimator that we might want to consider is how much we expect it to vary as a function of the data sample. Just as we computed the expectation of the estimator to determine its bias, we can compute its variance. The variance of an estimator is simply the variance

$$\mathrm{Var}(\hat{\theta}) \tag{5.45}$$

where the random variable is the training set. Alternately, the square root of the variance is called the standard error, denoted $\mathrm{SE}(\hat{\theta})$.
The variance or the standard error of an estimator provides a measure of how we would expect the estimate we compute from data to vary as we independently resample the dataset from the underlying data generating process. Just as we might like an estimator to exhibit low bias, we would also like it to have relatively low variance.

When we compute any statistic using a finite number of samples, our estimate of the true underlying parameter is uncertain, in the sense that we could have obtained other samples from the same distribution and their statistics would have been different. The expected degree of variation in any estimator is a source of error that we want to quantify.

The standard error of the mean is given by

$$\mathrm{SE}(\hat{\mu}_m) = \sqrt{\mathrm{Var}\left[\frac{1}{m} \sum_{i=1}^{m} x^{(i)}\right]} = \frac{\sigma}{\sqrt{m}}, \tag{5.46}$$

where $\sigma^2$ is the true variance of the samples $x^{(i)}$. The standard error is often estimated by using an estimate of $\sigma$. Unfortunately, neither the square root of the sample variance nor the square root of the unbiased estimator of the variance provides an unbiased estimate of the standard deviation.
Both approaches tend to underestimate the true standard deviation but are still used in practice. The square root of the unbiased estimator of the variance is less of an underestimate. For large $m$, the approximation is quite reasonable.

The standard error of the mean is very useful in machine learning experiments. We often estimate the generalization error by computing the sample mean of the error on the test set. The number of examples in the test set determines the accuracy of this estimate. Taking advantage of the central limit theorem, which tells us that the mean will be approximately distributed with a normal distribution, we can use the standard error to compute the probability that the true expectation falls in any chosen interval. For example, the 95% confidence interval centered on the mean $\hat{\mu}_m$ is

$$\left(\hat{\mu}_m - 1.96\,\mathrm{SE}(\hat{\mu}_m),\ \hat{\mu}_m + 1.96\,\mathrm{SE}(\hat{\mu}_m)\right), \tag{5.47}$$

under the normal distribution with mean $\hat{\mu}_m$ and variance $\mathrm{SE}(\hat{\mu}_m)^2$.
In machine learning experiments, it is common to say that algorithm A is better than algorithm B if the upper bound of the 95% confidence interval for the error of algorithm A is less than the lower bound of the 95% confidence interval for the error of algorithm B.
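As a concrete sketch of equations 5.46 and 5.47 (the error rate and test-set size here are invented for the demo), the interval can be computed from per-example 0/1 test errors:

```python
import numpy as np

# Hypothetical demo: per-example 0/1 test errors, with an assumed true error
# rate of 12% on an invented 1000-example test set.
rng = np.random.default_rng(2)
errors = rng.binomial(1, 0.12, size=1000)

mean = errors.mean()
se = errors.std(ddof=1) / np.sqrt(len(errors))  # estimated standard error
low, high = mean - 1.96 * se, mean + 1.96 * se  # 95% interval, equation 5.47
print(f"error = {mean:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
```

Note that this uses the (slightly biased) estimated standard deviation in place of the true $\sigma$, as discussed above.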
Example: Bernoulli distribution. We once again consider a set of samples $\{x^{(1)}, \ldots, x^{(m)}\}$ drawn independently and identically from a Bernoulli distribution (recall $p(x^{(i)}; \theta) = \theta^{x^{(i)}} (1 - \theta)^{(1 - x^{(i)})}$). This time we are interested in computing the variance of the estimator $\hat{\theta}_m = \frac{1}{m} \sum_{i=1}^{m} x^{(i)}$:

$$\begin{aligned}
\mathrm{Var}\left(\hat{\theta}_m\right) &= \mathrm{Var}\left(\frac{1}{m} \sum_{i=1}^{m} x^{(i)}\right) && (5.48)\\
&= \frac{1}{m^2} \sum_{i=1}^{m} \mathrm{Var}\left(x^{(i)}\right) && (5.49)\\
&= \frac{1}{m^2} \sum_{i=1}^{m} \theta(1 - \theta) && (5.50)\\
&= \frac{1}{m^2} m\, \theta(1 - \theta) && (5.51)\\
&= \frac{1}{m} \theta(1 - \theta). && (5.52)
\end{aligned}$$

The variance of the estimator decreases as a function of $m$, the number of examples in the dataset.
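This 1/m decay can be checked by simulation. A hypothetical sketch (theta, the values of m, and the trial count are arbitrary choices for the demo):

```python
import numpy as np

# Hypothetical demo: the variance of the Bernoulli sample mean shrinks as
# theta * (1 - theta) / m (equation 5.52). All values are invented.
rng = np.random.default_rng(3)
theta = 0.5
trials = 50_000

empirical_var = {}
for m in (10, 100):
    estimates = rng.binomial(1, theta, size=(trials, m)).mean(axis=1)
    empirical_var[m] = estimates.var()
    print(m, empirical_var[m], theta * (1 - theta) / m)
```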
This is a common property of popular estimators that we will return to when we discuss consistency (see section 5.4.5).

5.4.4 Trading off Bias and Variance to Minimize Mean Squared Error

Bias and variance measure two different sources of error in an estimator. Bias measures the expected deviation from the true value of the function or parameter. Variance, on the other hand, provides a measure of the deviation from the expected estimator value that any particular sampling of the data is likely to cause.

What happens when we are given a choice between two estimators, one with more bias and one with more variance? How do we choose between them? For example, imagine that we are interested in approximating the function shown in figure 5.2 and we are only offered the choice between a model with large bias and one that suffers from large variance. How do we choose between them?

The most common way to negotiate this trade-off is to use cross-validation. Empirically, cross-validation is highly successful on many real-world tasks.
Alternatively, we can also compare the mean squared error (MSE) of the estimates:

$$\begin{aligned}
\mathrm{MSE} &= \mathbb{E}\left[(\hat{\theta}_m - \theta)^2\right] && (5.53)\\
&= \mathrm{Bias}(\hat{\theta}_m)^2 + \mathrm{Var}(\hat{\theta}_m). && (5.54)
\end{aligned}$$
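The decomposition in equation 5.54 also holds for the empirical counterparts of MSE, bias, and variance, which gives a quick numerical check. This sketch uses the biased sample variance of equation 5.36 as the estimator; all values are invented:

```python
import numpy as np

# Hypothetical demo: check MSE = bias^2 + variance (equation 5.54) empirically,
# using the biased sample variance of a unit-variance Gaussian as the estimator.
rng = np.random.default_rng(4)
sigma2, m, trials = 1.0, 4, 400_000

x = rng.normal(0.0, 1.0, size=(trials, m))
estimates = x.var(axis=1, ddof=0)     # biased sample variance, equation 5.36

mse = ((estimates - sigma2) ** 2).mean()
bias = estimates.mean() - sigma2      # close to -sigma2 / m = -0.25
variance = estimates.var()
print(mse, bias ** 2 + variance)      # the two quantities agree
```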
The MSE measures the overall expected deviation, in a squared error sense, between the estimator and the true value of the parameter $\theta$. As is clear from equation 5.54, evaluating the MSE incorporates both the bias and the variance. Desirable estimators are those with small MSE; these are estimators that manage to keep both their bias and variance somewhat in check.

[Figure 5.6: As capacity increases (x-axis), bias (dotted) tends to decrease and variance (dashed) tends to increase, yielding another U-shaped curve for generalization error (bold curve). If we vary capacity along one axis, there is an optimal capacity, with underfitting when the capacity is below this optimum and overfitting when it is above. This relationship is similar to the relationship between capacity, underfitting, and overfitting, discussed in section 5.2 and figure 5.3.]
The relationship between bias and variance is tightly linked to the machine learning concepts of capacity, underfitting, and overfitting. In the case where generalization error is measured by the MSE (where bias and variance are meaningful components of generalization error), increasing capacity tends to increase variance and decrease bias. This is illustrated in figure 5.6, where we see again the U-shaped curve of generalization error as a function of capacity.

5.4.5 Consistency

So far we have discussed the properties of various estimators for a training set of fixed size. Usually, we are also concerned with the behavior of an estimator as the amount of training data grows. In particular, we usually wish that, as the number of data points $m$ in our dataset increases, our point estimates converge to the true
value of the corresponding parameters. More formally, we would like that

$$\operatorname{plim}_{m \to \infty} \hat{\theta}_m = \theta. \tag{5.55}$$

The symbol plim indicates convergence in probability, meaning that for any $\epsilon > 0$, $P(|\hat{\theta}_m - \theta| > \epsilon) \to 0$ as $m \to \infty$. The condition described by equation 5.55 is known as consistency. It is sometimes referred to as weak consistency, with strong consistency referring to the almost sure convergence of $\hat{\theta}$ to $\theta$. Almost sure convergence of a sequence of random variables $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots$ to a value $\mathbf{x}$ occurs when $p(\lim_{m \to \infty} \mathbf{x}^{(m)} = \mathbf{x}) = 1$.

Consistency ensures that the bias induced by the estimator diminishes as the number of data examples grows. However, the reverse is not true: asymptotic unbiasedness does not imply consistency. For example, consider estimating the mean parameter $\mu$ of a normal distribution $\mathcal{N}(x; \mu, \sigma^2)$, with a dataset consisting of $m$ samples $\{x^{(1)}, \ldots, x^{(m)}\}$.
We could use the first sample $x^{(1)}$ of the dataset as an unbiased estimator: $\hat{\theta} = x^{(1)}$. In that case, $\mathbb{E}(\hat{\theta}_m) = \theta$, so the estimator is unbiased no matter how many data points are seen. This, of course, implies that the estimate is asymptotically unbiased. However, this is not a consistent estimator, since it is not the case that $\hat{\theta}_m \to \theta$ as $m \to \infty$.

5.5 Maximum Likelihood Estimation

Previously, we have seen some definitions of common estimators and analyzed their properties. But where did these estimators come from? Rather than guessing that some function might make a good estimator and then analyzing its bias and variance, we would like to have some principle from which we can derive specific functions that are good estimators for different models.
The most common such principle is the maximum likelihood principle.

Consider a set of $m$ examples $\mathbb{X} = \{x^{(1)}, \ldots, x^{(m)}\}$ drawn independently from the true but unknown data generating distribution $p_{\mathrm{data}}(x)$.

Let $p_{\mathrm{model}}(x; \theta)$ be a parametric family of probability distributions over the same space indexed by $\theta$. In other words, $p_{\mathrm{model}}(x; \theta)$ maps any configuration $x$ to a real number estimating the true probability $p_{\mathrm{data}}(x)$.

The maximum likelihood estimator for $\theta$ is then defined as

$$\begin{aligned}
\theta_{\mathrm{ML}} &= \arg\max_{\theta} p_{\mathrm{model}}(\mathbb{X}; \theta) && (5.56)\\
&= \arg\max_{\theta} \prod_{i=1}^{m} p_{\mathrm{model}}(x^{(i)}; \theta). && (5.57)
\end{aligned}$$
This product over many probabilities can be inconvenient for a variety of reasons. For example, it is prone to numerical underflow. To obtain a more convenient but equivalent optimization problem, we observe that taking the logarithm of the likelihood does not change its arg max but does conveniently transform a product into a sum:

$$\theta_{\mathrm{ML}} = \arg\max_{\theta} \sum_{i=1}^{m} \log p_{\mathrm{model}}(x^{(i)}; \theta). \tag{5.58}$$

Because the arg max does not change when we rescale the cost function, we can divide by $m$ to obtain a version of the criterion that is expressed as an expectation with respect to the empirical distribution $\hat{p}_{\mathrm{data}}$ defined by the training data:

$$\theta_{\mathrm{ML}} = \arg\max_{\theta} \mathbb{E}_{\mathbf{x} \sim \hat{p}_{\mathrm{data}}} \log p_{\mathrm{model}}(x; \theta). \tag{5.59}$$

One way to interpret maximum likelihood estimation is to view it as minimizing the dissimilarity between the empirical distribution $\hat{p}_{\mathrm{data}}$ defined by the training set and the model distribution, with the degree of dissimilarity between the two measured by the KL divergence.
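As a minimal sketch of equation 5.58 (the dataset and grid are invented for the demo), scanning the Bernoulli log-likelihood over candidate values of theta recovers the sample mean of equation 5.22 as the maximizer:

```python
import numpy as np

# Hypothetical demo: maximum likelihood for a Bernoulli model by scanning the
# log-likelihood (equation 5.58) over a grid of candidate theta values.
rng = np.random.default_rng(6)
x = rng.binomial(1, 0.7, size=50)   # invented dataset
ones = x.sum()

thetas = np.linspace(0.001, 0.999, 999)
log_lik = ones * np.log(thetas) + (len(x) - ones) * np.log(1 - thetas)
theta_ml = thetas[log_lik.argmax()]
print(theta_ml, x.mean())           # the maximizer matches the sample mean
```

Working in log space here is exactly the underflow-avoiding trick described above: the product of 50 probabilities would be a very small number, while the sum of their logs is well behaved.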
The KL divergence is given by

$$D_{\mathrm{KL}}(\hat{p}_{\mathrm{data}} \,\|\, p_{\mathrm{model}}) = \mathbb{E}_{\mathbf{x} \sim \hat{p}_{\mathrm{data}}} \left[\log \hat{p}_{\mathrm{data}}(x) - \log p_{\mathrm{model}}(x)\right]. \tag{5.60}$$

The term on the left is a function only of the data generating process, not the model. This means when we train the model to minimize the KL divergence, we need only minimize

$$-\mathbb{E}_{\mathbf{x} \sim \hat{p}_{\mathrm{data}}} \left[\log p_{\mathrm{model}}(x)\right], \tag{5.61}$$

which is of course the same as the maximization in equation 5.59.

Minimizing this KL divergence corresponds exactly to minimizing the cross-entropy between the distributions. Many authors use the term "cross-entropy" to identify specifically the negative log-likelihood of a Bernoulli or softmax distribution, but that is a misnomer. Any loss consisting of a negative log-likelihood is a cross-entropy between the empirical distribution defined by the training set and the probability distribution defined by the model. For example, mean squared error is the cross-entropy between the empirical distribution and a Gaussian model.
We can thus see maximum likelihood as an attempt to make the model distribution match the empirical distribution $\hat{p}_{\mathrm{data}}$. Ideally, we would like to match the true data generating distribution $p_{\mathrm{data}}$, but we have no direct access to this distribution.

While the optimal $\theta$ is the same regardless of whether we are maximizing the likelihood or minimizing the KL divergence, the values of the objective functions
are different. In software, we often phrase both as minimizing a cost function. Maximum likelihood thus becomes minimization of the negative log-likelihood (NLL), or equivalently, minimization of the cross-entropy. The perspective of maximum likelihood as minimum KL divergence becomes helpful in this case because the KL divergence has a known minimum value of zero. The negative log-likelihood can actually become negative when $x$ is real-valued.

5.5.1 Conditional Log-Likelihood and Mean Squared Error

The maximum likelihood estimator can readily be generalized to the case where our goal is to estimate a conditional probability $P(\mathbf{y} \mid \mathbf{x}; \theta)$ in order to predict $\mathbf{y}$ given $\mathbf{x}$. This is actually the most common situation because it forms the basis for most supervised learning. If $\mathbf{X}$ represents all our inputs and $\mathbf{Y}$ all our observed targets, then the conditional maximum likelihood estimator is

$$\theta_{\mathrm{ML}} = \arg\max_{\theta} P(\mathbf{Y} \mid \mathbf{X}; \theta). \tag{5.62}$$
If the examples are assumed to be i.i.d., then this can be decomposed into

$$\theta_{\mathrm{ML}} = \arg\max_{\theta} \sum_{i=1}^{m} \log P(\mathbf{y}^{(i)} \mid \mathbf{x}^{(i)}; \theta). \tag{5.63}$$

Example: Linear regression as maximum likelihood. Linear regression, introduced earlier in section 5.1.4, may be justified as a maximum likelihood procedure. Previously, we motivated linear regression as an algorithm that learns to take an input $\mathbf{x}$ and produce an output value $\hat{y}$. The mapping from $\mathbf{x}$ to $\hat{y}$ is chosen to minimize mean squared error, a criterion that we introduced more or less arbitrarily. We now revisit linear regression from the point of view of maximum likelihood estimation. Instead of producing a single prediction $\hat{y}$, we now think of the model as producing a conditional distribution $p(y \mid \mathbf{x})$. We can imagine that with an infinitely large training set, we might see several training examples with the same input value $\mathbf{x}$ but different values of $y$. The goal of the learning algorithm is now to fit the distribution $p(y \mid \mathbf{x})$ to all of those different $y$ values that are all compatible with $\mathbf{x}$.
To derive the same linear regression algorithm we obtained before, we define

$$p(y \mid \mathbf{x}) = \mathcal{N}(y; \hat{y}(\mathbf{x}; \mathbf{w}), \sigma^2).$$

The function $\hat{y}(\mathbf{x}; \mathbf{w})$ gives the prediction of the mean of the Gaussian. In this example, we assume that the variance is fixed to some constant $\sigma^2$ chosen by the user. We will see that this choice of the functional form of $p(y \mid \mathbf{x})$ causes the maximum likelihood estimation procedure to yield the same learning algorithm as we developed before. Since the
examples are assumed to be i.i.d., the conditional log-likelihood (equation 5.63) is given by

$$\begin{aligned}
&\sum_{i=1}^{m} \log p(y^{(i)} \mid \mathbf{x}^{(i)}; \theta) && (5.64)\\
&= -m \log \sigma - \frac{m}{2} \log(2\pi) - \sum_{i=1}^{m} \frac{\left\|\hat{y}^{(i)} - y^{(i)}\right\|^2}{2\sigma^2}, && (5.65)
\end{aligned}$$

where $\hat{y}^{(i)}$ is the output of the linear regression on the $i$-th input $\mathbf{x}^{(i)}$ and $m$ is the number of training examples. Comparing the log-likelihood with the mean squared error,

$$\mathrm{MSE}_{\mathrm{train}} = \frac{1}{m} \sum_{i=1}^{m} \left\|\hat{y}^{(i)} - y^{(i)}\right\|^2, \tag{5.66}$$

we immediately see that maximizing the log-likelihood with respect to $\mathbf{w}$ yields the same estimate of the parameters $\mathbf{w}$ as does minimizing the mean squared error. The two criteria have different values but the same location of the optimum.
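The shared optimum is easy to confirm numerically. In this hypothetical sketch (the data, the true weight, and the fixed sigma^2 are invented), both criteria are evaluated over a grid of candidate scalar weights and select the same minimizer:

```python
import numpy as np

# Hypothetical demo: for a one-parameter linear model, the Gaussian NLL
# (equation 5.65, negated) and the MSE (equation 5.66) share one minimizer.
rng = np.random.default_rng(7)
x = rng.normal(size=100)
y = 1.5 * x + rng.normal(scale=0.5, size=100)  # invented true weight 1.5

ws = np.linspace(0.0, 3.0, 301)   # candidate weights
sigma2 = 0.25                     # fixed, user-chosen variance
residuals = np.outer(ws, x) - y   # one row of residuals per candidate w

mse = (residuals ** 2).mean(axis=1)
nll = (0.5 * np.log(2 * np.pi * sigma2) + residuals ** 2 / (2 * sigma2)).mean(axis=1)

print(ws[mse.argmin()], ws[nll.argmin()])  # identical minimizers
```

Since the NLL here is just an increasing affine transformation of the MSE, the two curves have different values but the same arg min, as the text states.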
This justifies the use of the MSE as a maximum likelihood estimation procedure. As we will see, the maximum likelihood estimator has several desirable properties.

5.5.2 Properties of Maximum Likelihood

The main appeal of the maximum likelihood estimator is that it can be shown to be the best estimator asymptotically, as the number of examples $m \to \infty$, in terms of its rate of convergence as $m$ increases.

Under appropriate conditions, the maximum likelihood estimator has the property of consistency (see section 5.4.5 above), meaning that as the number of training examples approaches infinity, the maximum likelihood estimate of a parameter converges to the true value of the parameter. These conditions are:

- The true distribution $p_{\mathrm{data}}$ must lie within the model family $p_{\mathrm{model}}(\cdot; \theta)$. Otherwise, no estimator can recover $p_{\mathrm{data}}$.

- The true distribution $p_{\mathrm{data}}$ must correspond to exactly one value of $\theta$. Otherwise, maximum likelihood can recover the correct $p_{\mathrm{data}}$ but will not be able to determine which value of $\theta$ was used by the data generating process.
There are other inductive principles besides the maximum likelihood estimator, many of which share the property of being consistent estimators. However,
consistent estimators can differ in their statistical efficiency, meaning that one consistent estimator may obtain lower generalization error for a fixed number of samples $m$, or equivalently, may require fewer examples to obtain a fixed level of generalization error.

Statistical efficiency is typically studied in the parametric case (as in linear regression), where our goal is to estimate the value of a parameter (assuming it is possible to identify the true parameter), not the value of a function. A way to measure how close we are to the true parameter is by the expected mean squared error, computing the squared difference between the estimated and true parameter values, where the expectation is over $m$ training samples from the data generating distribution. That parametric mean squared error decreases as $m$ increases, and for $m$ large, the Cramér-Rao lower bound (Rao, 1945; Cramér, 1946) shows that no consistent estimator has a lower mean squared error than the maximum likelihood estimator.

For these reasons (consistency and efficiency), maximum likelihood is often considered the preferred estimator to use for machine learning. When the number of examples is small enough to yield overfitting behavior, regularization strategies such as weight decay may be used to obtain
a biased version of maximum likelihood that has less variance when training data is limited.

5.6 Bayesian Statistics

So far we have discussed frequentist statistics and approaches based on estimating a single value of $\theta$, then making all predictions thereafter based on that one estimate. Another approach is to consider all possible values of $\theta$ when making a prediction. The latter is the domain of Bayesian statistics.

As discussed in section 5.4.1, the frequentist perspective is that the true parameter value $\theta$ is fixed but unknown, while the point estimate $\hat{\theta}$ is a random variable on account of it being a function of the dataset (which is seen as random).

The Bayesian perspective on statistics is quite different. The Bayesian uses probability to reflect degrees of certainty of states of knowledge. The dataset is directly observed and so is not random. On the other hand, the true parameter $\theta$ is unknown or uncertain and thus is represented as a random variable.
Before observing the data, we represent our knowledge of $\theta$ using the prior probability distribution, $p(\theta)$ (sometimes referred to as simply "the prior"). Generally, the machine learning practitioner selects a prior distribution that is quite broad (i.e., with high entropy) to reflect a high degree of uncertainty in the
value of $\theta$ before observing any data. For example, one might assume a priori that $\theta$ lies in some finite range or volume, with a uniform distribution. Many priors instead reflect a preference for "simpler" solutions (such as smaller magnitude weights, or a function that is closer to being constant).

Now consider that we have a set of data samples $\{x^{(1)}, \ldots, x^{(m)}\}$. We can recover the effect of the data on our belief about $\theta$ by combining the data likelihood $p(x^{(1)}, \ldots, x^{(m)} \mid \theta)$ with the prior via Bayes' rule:

$$p(\theta \mid x^{(1)}, \ldots, x^{(m)}) = \frac{p(x^{(1)}, \ldots, x^{(m)} \mid \theta)\, p(\theta)}{p(x^{(1)}, \ldots, x^{(m)})}. \tag{5.67}$$

In the scenarios where Bayesian estimation is typically used, the prior begins as a relatively uniform or Gaussian distribution with high entropy, and the observation of the data usually causes the posterior to lose entropy and concentrate around a few highly likely values of the parameters.
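Bayes' rule in equation 5.67 can be sketched on a grid for a Bernoulli parameter. This is a hypothetical demo (data, grid, and prior are all invented): a broad uniform prior is reweighted by the likelihood, and the resulting posterior concentrates near the data mean, illustrating the loss of entropy described above:

```python
import numpy as np

# Hypothetical demo: grid-based Bayes' rule (equation 5.67) for a Bernoulli
# parameter with a broad (uniform) prior. All values are invented.
rng = np.random.default_rng(8)
x = rng.binomial(1, 0.7, size=40)   # observed data
ones = x.sum()

thetas = np.linspace(0.001, 0.999, 999)
prior = np.full_like(thetas, 1.0 / len(thetas))
log_lik = ones * np.log(thetas) + (len(x) - ones) * np.log(1 - thetas)

posterior = prior * np.exp(log_lik - log_lik.max())  # likelihood times prior
posterior /= posterior.sum()                         # normalizing denominator
print(thetas[posterior.argmax()])   # posterior mode lies near x.mean()
```

Subtracting `log_lik.max()` before exponentiating is a numerical-stability choice; the constant cancels in the normalization.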
Relative to maximum likelihood estimation, Bayesian estimation offers two important differences. First, unlike the maximum likelihood approach that makes predictions using a point estimate of $\theta$, the Bayesian approach is to make predictions using a full distribution over $\theta$. For example, after observing $m$ examples, the predicted distribution over the next data sample, $x^{(m+1)}$, is given by

$$p(x^{(m+1)} \mid x^{(1)}, \ldots, x^{(m)}) = \int p(x^{(m+1)} \mid \theta)\, p(\theta \mid x^{(1)}, \ldots, x^{(m)})\, d\theta. \tag{5.68}$$

Here each value of $\theta$ with positive probability density contributes to the prediction of the next example, with the contribution weighted by the posterior density itself. After having observed $\{x^{(1)}, \ldots, x^{(m)}\}$, if we are still quite uncertain about the value of $\theta$, then this uncertainty is incorporated directly into any predictions we might make.
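Equation 5.68 can be approximated on the same kind of grid: each candidate theta's prediction for the next sample is weighted by its posterior probability. A hypothetical sketch with a uniform prior and invented data:

```python
import numpy as np

# Hypothetical demo: approximate the Bayesian predictive distribution
# (equation 5.68) for the next Bernoulli sample via a sum over a theta grid.
rng = np.random.default_rng(9)
x = rng.binomial(1, 0.7, size=40)   # observed data
ones = x.sum()

thetas = np.linspace(0.001, 0.999, 999)
log_lik = ones * np.log(thetas) + (len(x) - ones) * np.log(1 - thetas)
posterior = np.exp(log_lik - log_lik.max())  # uniform prior cancels out
posterior /= posterior.sum()

# p(x^(m+1) = 1 | data): weight each theta's prediction by its posterior mass
p_next = (thetas * posterior).sum()
print(p_next)
```

With a uniform prior, this grid sum closely tracks Laplace's rule of succession, $(\sum_i x^{(i)} + 1)/(m + 2)$, rather than the raw sample mean: remaining uncertainty about $\theta$ is built into the prediction.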
In section 5.4, we discussed how the frequentist approach addresses the uncertainty in a given point estimate of θ by evaluating its variance. The variance of the estimator is an assessment of how the estimate might change with alternative samplings of the observed data. The Bayesian answer to the question of how to deal with the uncertainty in the estimator is to simply integrate over it, which tends to protect well against overfitting. This integral is of course just an application of the laws of probability, making the Bayesian approach simple to justify, while the frequentist machinery for constructing an estimator is based on the rather ad hoc decision to summarize all knowledge contained in the dataset with a single point estimate.

The second important difference between the Bayesian approach to estimation and the maximum likelihood approach is due to the contribution of the Bayesian prior distribution.
The prior has an influence by shifting probability mass density towards regions of the parameter space that are preferred a priori. In practice, the prior often expresses a preference for models that are simpler or more smooth. Critics of the Bayesian approach identify the prior as a source of subjective human judgment impacting the predictions. Bayesian methods typically generalize much better when limited training data is available, but typically suffer from high computational cost when the number of training examples is large.

Example: Bayesian Linear Regression. Here we consider the Bayesian estimation approach to learning the linear regression parameters. In linear regression, we learn a linear mapping from an input vector x ∈ R^n to predict the value of a scalar y ∈ R. The prediction is parametrized by the vector w ∈ R^n:

    ŷ = w^T x.    (5.69)

Given a set of m training samples (X^(train), y^(train)), we can express the prediction of y over the entire training set as

    ŷ^(train) = X^(train) w.    (5.70)
Expressed as a Gaussian conditional distribution on y^(train), we have

    p(y^(train) | X^(train), w) = N(y^(train); X^(train) w, I)    (5.71)
    ∝ exp( −(1/2) (y^(train) − X^(train) w)^T (y^(train) − X^(train) w) ),    (5.72)

where we follow the standard MSE formulation in assuming that the Gaussian variance on y is one. In what follows, to reduce the notational burden, we refer to (X^(train), y^(train)) as simply (X, y).

To determine the posterior distribution over the model parameter vector w, we first need to specify a prior distribution. The prior should reflect our naive belief about the value of these parameters. While it is sometimes difficult or unnatural to express our prior beliefs in terms of the parameters of the model, in practice we typically assume a fairly broad distribution expressing a high degree of uncertainty about θ.
For real-valued parameters it is common to use a Gaussian as a prior distribution:

    p(w) = N(w; μ₀, Λ₀) ∝ exp( −(1/2) (w − μ₀)^T Λ₀^{-1} (w − μ₀) ),    (5.73)
where μ₀ and Λ₀ are the prior distribution mean vector and covariance matrix, respectively.¹

With the prior thus specified, we can now proceed in determining the posterior distribution over the model parameters:

    p(w | X, y) ∝ p(y | X, w) p(w)    (5.74)
    ∝ exp( −(1/2) (y − Xw)^T (y − Xw) ) exp( −(1/2) (w − μ₀)^T Λ₀^{-1} (w − μ₀) )    (5.75)
    ∝ exp( −(1/2) ( −2 y^T X w + w^T X^T X w + w^T Λ₀^{-1} w − 2 μ₀^T Λ₀^{-1} w ) ).    (5.76)

We now define Λ_m = (X^T X + Λ₀^{-1})^{-1} and μ_m = Λ_m (X^T y + Λ₀^{-1} μ₀). Using these new variables, we find that the posterior may be rewritten as a Gaussian distribution:

    p(w | X, y) ∝ exp( −(1/2) (w − μ_m)^T Λ_m^{-1} (w − μ_m) + (1/2) μ_m^T Λ_m^{-1} μ_m )    (5.77)
    ∝ exp( −(1/2) (w − μ_m)^T Λ_m^{-1} (w − μ_m) ).    (5.78)

All terms that do not include the parameter vector w have been omitted; they are implied by the fact that the distribution must be normalized to integrate to 1. Equation 3.23 shows how to normalize a multivariate Gaussian distribution.

Examining this posterior distribution allows us to gain some intuition for the effect of Bayesian inference. In most situations, we set μ₀ to 0. If we set Λ₀ = (1/α) I, then μ_m gives the same estimate of w as does frequentist linear regression with a weight decay penalty of α w^T w. One difference is that the Bayesian estimate is undefined if α is set to zero; we are not allowed to begin the Bayesian learning process with an infinitely wide prior on w. The more important difference is that the Bayesian estimate provides a covariance matrix, showing how likely all the different values of w are, rather than providing only the estimate μ_m.
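As a numerical check on the closed-form posterior, the following sketch (with an arbitrarily chosen toy dataset and prior strength as assumptions, not from the text) computes Λ_m and μ_m and confirms that with μ₀ = 0 and Λ₀ = (1/α)I the posterior mean coincides with the frequentist weight-decay solution (X^T X + αI)^{-1} X^T y.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration): y ≈ X w_true + unit noise,
# matching the text's assumption of Gaussian variance one on y.
m, n = 50, 3
X = rng.normal(size=(m, n))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=1.0, size=m)

# Prior: mu_0 = 0, Lambda_0 = (1/alpha) I, so Lambda_0^{-1} = alpha I.
alpha = 0.1
mu0 = np.zeros(n)
Lambda0_inv = alpha * np.eye(n)

# Posterior parameters: Lambda_m = (X^T X + Lambda_0^{-1})^{-1},
#                       mu_m     = Lambda_m (X^T y + Lambda_0^{-1} mu_0).
Lambda_m = np.linalg.inv(X.T @ X + Lambda0_inv)
mu_m = Lambda_m @ (X.T @ y + Lambda0_inv @ mu0)

# Frequentist estimate with weight decay penalty alpha * w^T w.
ridge = np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

print(np.allclose(mu_m, ridge))
```

Unlike the frequentist estimate, Λ_m is also available here, quantifying how uncertain each coordinate of w remains after seeing the data.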
5.6.1 Maximum A Posteriori (MAP) Estimation

While the most principled approach is to make predictions using the full Bayesian posterior distribution over the parameter θ, it is still often desirable to have a single point estimate.

¹ Unless there is a reason to assume a particular covariance structure, we typically assume a diagonal covariance matrix Λ₀ = diag(λ₀).
One common reason for desiring a point estimate is that most operations involving the Bayesian posterior for most interesting models are intractable, and a point estimate offers a tractable approximation. Rather than simply returning to the maximum likelihood estimate, we can still gain some of the benefit of the Bayesian approach by allowing the prior to influence the choice of the point estimate. One rational way to do this is to choose the maximum a posteriori (MAP) point estimate. The MAP estimate chooses the point of maximal posterior probability (or maximal probability density in the more common case of continuous θ):

    θ_MAP = arg max_θ p(θ | x) = arg max_θ log p(x | θ) + log p(θ).    (5.79)

We recognize, above on the right hand side, log p(x | θ), i.e. the standard log-likelihood term, and log p(θ), corresponding to the prior distribution.

As an example, consider a linear regression model with a Gaussian prior on the weights w.
If this prior is given by N(w; 0, (1/λ) I²), then the log-prior term in equation 5.79 is proportional to the familiar λ w^T w weight decay penalty, plus a term that does not depend on w and does not affect the learning process. MAP Bayesian inference with a Gaussian prior on the weights thus corresponds to weight decay.

As with full Bayesian inference, MAP Bayesian inference has the advantage of leveraging information that is brought by the prior and cannot be found in the training data. This additional information helps to reduce the variance in the MAP point estimate (in comparison to the ML estimate). However, it does so at the price of increased bias.

Many regularized estimation strategies, such as maximum likelihood learning regularized with weight decay, can be interpreted as making the MAP approximation to Bayesian inference. This view applies when the regularization consists of adding an extra term to the objective function that corresponds to log p(θ).
Not all regularization penalties correspond to MAP Bayesian inference. For example, some regularizer terms may not be the logarithm of a probability distribution. Other regularization terms depend on the data, which of course a prior probability distribution is not allowed to do.

MAP Bayesian inference provides a straightforward way to design complicated yet interpretable regularization terms. For example, a more complicated penalty term can be derived by using a mixture of Gaussians, rather than a single Gaussian distribution, as the prior (Nowlan and Hinton, 1992).
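The correspondence between MAP estimation with a Gaussian prior and weight decay can be checked directly. The sketch below (with an arbitrary toy dataset, learning rate, and iteration count as assumptions, not from the text) ascends the log-posterior of a linear regression model by gradient steps and compares the result with the closed-form weight-decay solution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data (assumed): linear model with Gaussian noise.
m, n = 100, 4
X = rng.normal(size=(m, n))
y = X @ rng.normal(size=n) + rng.normal(scale=0.5, size=m)

lam = 0.5  # weight decay strength lambda

# Up to constants, the log-posterior of eq. 5.79 for this model is
#   log p(y | X, w) + log p(w) = -(1/2)||y - Xw||^2 - (lam/2) w^T w,
# so its gradient with respect to w is X^T (y - Xw) - lam * w.
w = np.zeros(n)
lr = 1e-3
for _ in range(20000):
    w += lr * (X.T @ (y - X @ w) - lam * w)   # gradient ascent step

# Closed form of the same maximizer: ridge / weight decay regression.
w_closed = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

print(np.allclose(w, w_closed, atol=1e-6))
```

The iterative MAP search and the weight-decay normal equations land on the same point estimate, which is the content of the equivalence stated above.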
5.7 Supervised Learning Algorithms

Recall from section 5.1.3 that supervised learning algorithms are, roughly speaking, learning algorithms that learn to associate some input with some output, given a training set of examples of inputs x and outputs y. In many cases the outputs y may be difficult to collect automatically and must be provided by a human "supervisor," but the term still applies even when the training set targets were collected automatically.

5.7.1 Probabilistic Supervised Learning

Most supervised learning algorithms in this book are based on estimating a probability distribution p(y | x). We can do this simply by using maximum likelihood estimation to find the best parameter vector θ for a parametric family of distributions p(y | x; θ).

We have already seen that linear regression corresponds to the family

    p(y | x; θ) = N(y; θ^T x, I).    (5.80)

We can generalize linear regression to the classification scenario by defining a different family of probability distributions. If we have two classes, class 0 and class 1, then we need only specify the probability of one of these classes. The probability of class 1 determines the probability of class 0, because these two values must add up to 1.
The normal distribution over real-valued numbers that we used for linear regression is parametrized in terms of a mean. Any value we supply for this mean is valid. A distribution over a binary variable is slightly more complicated, because its mean must always be between 0 and 1. One way to solve this problem is to use the logistic sigmoid function to squash the output of the linear function into the interval (0, 1) and interpret that value as a probability:

    p(y = 1 | x; θ) = σ(θ^T x).    (5.81)

This approach is known as logistic regression (a somewhat strange name, since we use the model for classification rather than regression).
In the case of linear regression, we were able to find the optimal weights by solving the normal equations. Logistic regression is somewhat more difficult. There is no closed-form solution for its optimal weights. Instead, we must search for them by maximizing the log-likelihood. We can do this by minimizing the negative log-likelihood (NLL) using gradient descent.
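A minimal sketch of this procedure follows. It is not from the text: the dataset is synthetic, generated from an assumed "true" θ, and the learning rate and step count are arbitrary choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)

# Synthetic binary classification data (assumed): labels drawn from the
# model of eq. 5.81 with a known parameter vector.
m, n = 200, 3
X = rng.normal(size=(m, n))
theta_true = np.array([2.0, -1.0, 0.5])
y = (rng.random(m) < sigmoid(X @ theta_true)).astype(float)

# Gradient descent on the mean negative log-likelihood
#   NLL(theta) = -(1/m) sum_i [ y_i log p_i + (1 - y_i) log(1 - p_i) ],
# whose gradient is (1/m) X^T (p - y) with p_i = sigmoid(theta^T x_i).
theta = np.zeros(n)
lr = 0.1
for _ in range(2000):
    p = sigmoid(X @ theta)
    theta -= lr * (X.T @ (p - y) / m)

# With no closed-form solution, the search still recovers a useful model:
accuracy = np.mean((sigmoid(X @ theta) > 0.5) == (y > 0.5))
print(accuracy)
```

Because the NLL of logistic regression is convex in θ, gradient descent with a small enough step size approaches the unique optimum rather than a local one.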
This same strategy can be applied to essentially any supervised learning problem, by writing down a parametric family of conditional probability distributions over the right kind of input and output variables.

5.7.2 Support Vector Machines

One of the most influential approaches to supervised learning is the support vector machine (Boser et al., 1992; Cortes and Vapnik, 1995). This model is similar to logistic regression in that it is driven by a linear function w^T x + b. Unlike logistic regression, the support vector machine does not provide probabilities, but only outputs a class identity. The SVM predicts that the positive class is present when w^T x + b is positive. Likewise, it predicts that the negative class is present when w^T x + b is negative.

One key innovation associated with support vector machines is the kernel trick. The kernel trick consists of observing that many machine learning algorithms can be written exclusively in terms of dot products between examples. For example, it can be shown that the linear function used by the support vector machine can be rewritten as

    w^T x + b = b + Σ_{i=1}^m α_i x^T x^(i),    (5.82)
where x^(i) is a training example and α is a vector of coefficients. Rewriting the learning algorithm this way allows us to replace x by the output of a given feature function φ(x) and the dot product with a function k(x, x^(i)) = φ(x) · φ(x^(i)) called a kernel. The · operator represents an inner product analogous to φ(x)^T φ(x^(i)). For some feature spaces, we may not use literally the vector inner product. In some infinite dimensional spaces, we need to use other kinds of inner products, for example, inner products based on integration rather than summation. A complete development of these kinds of inner products is beyond the scope of this book.

After replacing dot products with kernel evaluations, we can make predictions using the function

    f(x) = b + Σ_i α_i k(x, x^(i)).    (5.83)
This function is nonlinear with respect to x, but the relationship between φ(x) and f(x) is linear. Also, the relationship between α and f(x) is linear. The kernel-based function is exactly equivalent to preprocessing the data by applying φ(x) to all inputs, then learning a linear model in the new transformed space.

The kernel trick is powerful for two reasons. First, it allows us to learn models that are nonlinear as a function of x using convex optimization techniques that are guaranteed to converge efficiently.
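The equivalence between a kernelized prediction and a linear model in the transformed space can be demonstrated for a kernel whose feature map is small enough to write out explicitly. The sketch below (my own illustration; the data, coefficients, and bias are arbitrary assumptions) uses the homogeneous quadratic kernel k(x, x') = (x^T x')², whose feature map φ(x) is the vector of all pairwise products x_j x_k.

```python
import numpy as np

def phi(x):
    # Explicit feature map for the quadratic kernel: all products x_j * x_k,
    # so that phi(x) . phi(x') = (x . x')^2.
    return np.outer(x, x).ravel()

def k(x, xp):
    # The same inner product, computed without ever forming phi.
    return np.dot(x, xp) ** 2

rng = np.random.default_rng(3)
Xtrain = rng.normal(size=(5, 4))      # five training examples (assumed data)
alpha = rng.normal(size=5)            # arbitrary coefficients for illustration
b = 0.3
x = rng.normal(size=4)                # a new test point

# Prediction via eq. 5.83: f(x) = b + sum_i alpha_i k(x, x^(i)).
f_kernel = b + sum(a * k(x, xi) for a, xi in zip(alpha, Xtrain))

# Equivalent prediction: a linear model applied to phi(x).
w_phi = sum(a * phi(xi) for a, xi in zip(alpha, Xtrain))
f_linear = b + np.dot(w_phi, phi(x))

print(np.isclose(f_kernel, f_linear))   # the two formulations agree
```

Note that f is nonlinear in x (it is quadratic here) even though it is linear in both φ(x) and α, exactly as described above.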
This is possible because we consider φ fixed and optimize only α, i.e., the optimization algorithm can view the decision function as being linear in a different space. Second, the kernel function k often admits an implementation that is significantly more computationally efficient than naively constructing two φ(x) vectors and explicitly taking their dot product.

In some cases, φ(x) can even be infinite dimensional, which would result in an infinite computational cost for the naive, explicit approach. In many cases, k(x, x') is a nonlinear, tractable function of x even when φ(x) is intractable. As an example of an infinite-dimensional feature space with a tractable kernel, we construct a feature mapping φ(x) over the non-negative integers x. Suppose that this mapping returns a vector containing x ones followed by infinitely many zeros. We can write a kernel function k(x, x^(i)) = min(x, x^(i)) that is exactly equivalent to the corresponding infinite-dimensional dot product. The most commonly used kernel is the Gaussian kernel.
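The min-kernel example can be checked numerically by truncating the infinite feature vector, since inputs below the truncation length are unaffected by the cut. A sketch (my own, with an arbitrary truncation length of 100):

```python
import numpy as np

def phi(x, dim=100):
    # Truncated version of the infinite feature map: x ones, then zeros.
    v = np.zeros(dim)
    v[:x] = 1.0
    return v

def k(x, xp):
    # The tractable kernel that replaces the infinite-dimensional dot product.
    return min(x, xp)

# The explicit dot product phi(x) . phi(x') counts the positions where
# both vectors are 1, which is exactly min(x, x').
for x in range(20):
    for xp in range(20):
        assert np.dot(phi(x), phi(xp)) == k(x, xp)

print("min kernel matches the truncated infinite-dimensional dot product")
```

Evaluating min(x, x') costs a single comparison, illustrating the efficiency argument: the kernel sidesteps the (here infinite) cost of constructing φ explicitly.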