Chapter 1. Introduction

At the same time that the scale and accuracy of deep networks have increased, so has the complexity of the tasks that they can solve. Goodfellow et al. (2014d) showed that neural networks could learn to output an entire sequence of characters transcribed from an image, rather than just identifying a single object. Previously, it was widely believed that this kind of learning required labeling of the individual elements of the sequence (Gulcehre and Bengio, 2013). Recurrent neural networks, such as the LSTM sequence model mentioned above, are now used to model relationships between sequences and other sequences rather than just fixed inputs. This sequence-to-sequence learning seems to be on the cusp of revolutionizing another application: machine translation (Sutskever et al., 2014; Bahdanau et al., 2015). This trend of increasing complexity has been pushed to its logical conclusion with the introduction of neural Turing machines (Graves et al., 2014a) that learn to read from memory cells and write arbitrary content to memory cells. Such neural networks can learn simple programs from examples of desired behavior. For example, they can learn to sort lists of numbers given examples of scrambled and sorted sequences.
[Source: Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville]
This self-programming technology is in its infancy, but in the future it could in principle be applied to nearly any task. Another crowning achievement of deep learning is its extension to the domain of reinforcement learning. In the context of reinforcement learning, an autonomous agent must learn to perform a task by trial and error, without any guidance from the human operator. DeepMind demonstrated that a reinforcement learning system based on deep learning is capable of learning to play Atari video games, reaching human-level performance on many tasks (Mnih et al., 2015). Deep learning has also significantly improved the performance of reinforcement learning for robotics (Finn et al., 2015). Many of these applications of deep learning are highly profitable. Deep learning is now used by many top technology companies, including Google, Microsoft, Facebook, IBM, Baidu, Apple, Adobe, Netflix, NVIDIA and NEC.
Advances in deep learning have also depended heavily on advances in software infrastructure. Software libraries such as Theano (Bergstra et al., 2010; Bastien et al., 2012), Pylearn2 (Goodfellow et al., 2013c), Torch (Collobert et al., 2011b), DistBelief (Dean et al., 2012), Caffe (Jia, 2013), MXNet (Chen et al., 2015), and TensorFlow (Abadi et al., 2015) have all supported important research projects or commercial products. Deep learning has also made contributions back to other sciences. Modern convolutional networks for object recognition provide a model of visual processing
that neuroscientists can study (DiCarlo, 2013). Deep learning also provides useful tools for processing massive amounts of data and making useful predictions in scientific fields. It has been successfully used to predict how molecules will interact in order to help pharmaceutical companies design new drugs (Dahl et al., 2014), to search for subatomic particles (Baldi et al., 2014), and to automatically parse microscope images used to construct a 3-D map of the human brain (Knowles-Barley et al., 2014). We expect deep learning to appear in more and more scientific fields in the future.

In summary, deep learning is an approach to machine learning that has drawn heavily on our knowledge of the human brain, statistics and applied math as it developed over the past several decades. In recent years, it has seen tremendous growth in its popularity and usefulness, due in large part to more powerful computers, larger datasets and techniques to train deeper networks. The years ahead are full of challenges and opportunities to improve deep learning even further and bring it to new frontiers.
Figure 1.11: Since the introduction of hidden units, artificial neural networks have doubled in size roughly every 2.4 years. Biological neural network sizes from Wikipedia (2015). [Plot: number of neurons (logarithmic scale, 10^−2 to 10^11) against year (1950–2056), with biological reference points ranging from sponge, roundworm and leech through ant, bee, frog and octopus up to human.]
1. Perceptron (Rosenblatt, 1958, 1962)
2. Adaptive linear element (Widrow and Hoff, 1960)
3. Neocognitron (Fukushima, 1980)
4. Early back-propagation network (Rumelhart et al., 1986b)
5. Recurrent neural network for speech recognition (Robinson and Fallside, 1991)
6. Multilayer perceptron for speech recognition (Bengio et al., 1991)
7. Mean field sigmoid belief network (Saul et al., 1996)
8. LeNet-5 (LeCun et al., 1998b)
9. Echo state network (Jaeger and Haas, 2004)
10. Deep belief network (Hinton et al., 2006)
11. GPU-accelerated convolutional network (Chellapilla et al., 2006)
12. Deep Boltzmann machine (Salakhutdinov and Hinton, 2009a)
13. GPU-accelerated deep belief network (Raina et al., 2009)
14. Unsupervised convolutional network (Jarrett et al., 2009)
15. GPU-accelerated multilayer perceptron (Ciresan et al., 2010)
16. OMP-1 network (Coates and Ng, 2011)
17. Distributed autoencoder (Le et al., 2012)
18. Multi-GPU convolutional network (Krizhevsky et al., 2012)
19. COTS HPC unsupervised convolutional network (Coates et al., 2013)
20. GoogLeNet (Szegedy et al., 2014a)
Figure 1.12: Since deep networks reached the scale necessary to compete in the ImageNet Large Scale Visual Recognition Challenge, they have consistently won the competition every year, and yielded lower and lower error rates each time. Data from Russakovsky et al. (2014b) and He (2015). [Plot: ILSVRC classification error rate, 0.00–0.30, by year, 2010–2015.]
Part I

Applied Math and Machine Learning Basics
This part of the book introduces the basic mathematical concepts needed to understand deep learning. We begin with general ideas from applied math that allow us to define functions of many variables, find the highest and lowest points on these functions and quantify degrees of belief.

Next, we describe the fundamental goals of machine learning. We describe how to accomplish these goals by specifying a model that represents certain beliefs, designing a cost function that measures how well those beliefs correspond with reality and using a training algorithm to minimize that cost function.

This elementary framework is the basis for a broad variety of machine learning algorithms, including approaches to machine learning that are not deep. In the subsequent parts of the book, we develop deep learning algorithms within this framework.
Chapter 2. Linear Algebra

Linear algebra is a branch of mathematics that is widely used throughout science and engineering. However, because linear algebra is a form of continuous rather than discrete mathematics, many computer scientists have little experience with it. A good understanding of linear algebra is essential for understanding and working with many machine learning algorithms, especially deep learning algorithms. We therefore precede our introduction to deep learning with a focused presentation of the key linear algebra prerequisites.

If you are already familiar with linear algebra, feel free to skip this chapter. If you have previous experience with these concepts but need a detailed reference sheet to review key formulas, we recommend The Matrix Cookbook (Petersen and Pedersen, 2006). If you have no exposure at all to linear algebra, this chapter will teach you enough to read this book, but we highly recommend that you also consult another resource focused exclusively on teaching linear algebra, such as Shilov (1977). This chapter will completely omit many important linear algebra topics that are not essential for understanding deep learning.
2.1 Scalars, Vectors, Matrices and Tensors

The study of linear algebra involves several types of mathematical objects:

• Scalars: A scalar is just a single number, in contrast to most of the other objects studied in linear algebra, which are usually arrays of multiple numbers. We write scalars in italics. We usually give scalars lower-case variable names. When we introduce them, we specify what kind of number they are. For
example, we might say "let s ∈ R be the slope of the line," while defining a real-valued scalar, or "let n ∈ N be the number of units," while defining a natural number scalar.

• Vectors: A vector is an array of numbers. The numbers are arranged in order. We can identify each individual number by its index in that ordering. Typically we give vectors lower-case names written in bold typeface, such as x. The elements of the vector are identified by writing its name in italic typeface, with a subscript. The first element of x is x_1, the second element is x_2, and so on. We also need to say what kind of numbers are stored in the vector. If each element is in R, and the vector has n elements, then the vector lies in the set formed by taking the Cartesian product of R n times, denoted as R^n.
When we need to explicitly identify the elements of a vector, we write them as a column enclosed in square brackets:

x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}.   (2.1)

We can think of vectors as identifying points in space, with each element giving the coordinate along a different axis.

Sometimes we need to index a set of elements of a vector. In this case, we define a set containing the indices and write the set as a subscript. For example, to access x_1, x_3 and x_6, we define the set S = {1, 3, 6} and write x_S. We use the − sign to index the complement of a set. For example, x_{−1} is the vector containing all elements of x except for x_1, and x_{−S} is the vector containing all of the elements of x except for x_1, x_3 and x_6.

• Matrices: A matrix is a 2-D array of numbers, so each element is identified by two indices instead of just one. We usually give matrices upper-case variable names with bold typeface, such as A.
If a real-valued matrix A has a height of m and a width of n, then we say that A ∈ R^{m×n}. We usually identify the elements of a matrix using its name in italic but not bold font, and the indices are listed with separating commas. For example, A_{1,1} is the upper left entry of A and A_{m,n} is the bottom right entry. We can identify all of the numbers with vertical coordinate i by writing a ":" for the horizontal coordinate. For example, A_{i,:} denotes the horizontal cross section of A with vertical coordinate i. This is known as the i-th row of A. Likewise, A_{:,i} is
the i-th column of A.

Figure 2.1: The transpose of the matrix can be thought of as a mirror image across the main diagonal:

A = \begin{bmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \\ A_{3,1} & A_{3,2} \end{bmatrix} ⇒ A^⊤ = \begin{bmatrix} A_{1,1} & A_{2,1} & A_{3,1} \\ A_{1,2} & A_{2,2} & A_{3,2} \end{bmatrix}

When we need to explicitly identify the elements of a matrix, we write them as an array enclosed in square brackets:

\begin{bmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{bmatrix}.   (2.2)

Sometimes we may need to index matrix-valued expressions that are not just a single letter. In this case, we use subscripts after the expression, but do not convert anything to lower case. For example, f(A)_{i,j} gives element (i, j) of the matrix computed by applying the function f to A.

• Tensors: In some cases we will need an array with more than two axes. In the general case, an array of numbers arranged on a regular grid with a variable number of axes is known as a tensor.
We denote a tensor named "A" with this typeface: A. We identify the element of A at coordinates (i, j, k) by writing A_{i,j,k}.

One important operation on matrices is the transpose. The transpose of a matrix is the mirror image of the matrix across a diagonal line, called the main diagonal, running down and to the right, starting from its upper left corner. See figure 2.1 for a graphical depiction of this operation. We denote the transpose of a matrix A as A^⊤, and it is defined such that

(A^⊤)_{i,j} = A_{j,i}.   (2.3)

Vectors can be thought of as matrices that contain only one column. The transpose of a vector is therefore a matrix with only one row. Sometimes we
define a vector by writing out its elements in the text inline as a row matrix, then using the transpose operator to turn it into a standard column vector, e.g., x = [x_1, x_2, x_3]^⊤.

A scalar can be thought of as a matrix with only a single entry. From this, we can see that a scalar is its own transpose: a = a^⊤.

We can add matrices to each other, as long as they have the same shape, just by adding their corresponding elements: C = A + B, where C_{i,j} = A_{i,j} + B_{i,j}.

We can also add a scalar to a matrix or multiply a matrix by a scalar, just by performing that operation on each element of a matrix: D = a · B + c, where D_{i,j} = a · B_{i,j} + c.
In the context of deep learning, we also use some less conventional notation. We allow the addition of a matrix and a vector, yielding another matrix: C = A + b, where C_{i,j} = A_{i,j} + b_j. In other words, the vector b is added to each row of the matrix. This shorthand eliminates the need to define a matrix with b copied into each row before doing the addition. This implicit copying of b to many locations is called broadcasting.

2.2 Multiplying Matrices and Vectors

One of the most important operations involving matrices is multiplication of two matrices. The matrix product of matrices A and B is a third matrix C. In order for this product to be defined, A must have the same number of columns as B has rows. If A is of shape m × n and B is of shape n × p, then C is of shape m × p. We can write the matrix product just by placing two or more matrices together, e.g.,

C = AB.   (2.4)
The product operation is defined by

C_{i,j} = Σ_k A_{i,k} B_{k,j}.   (2.5)

Note that the standard product of two matrices is not just a matrix containing the product of the individual elements. Such an operation exists and is called the element-wise product or Hadamard product, and is denoted as A ⊙ B.

The dot product between two vectors x and y of the same dimensionality is the matrix product x^⊤y. We can think of the matrix product C = AB as computing C_{i,j} as the dot product between row i of A and column j of B.
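As a concrete illustration (an addition here, not part of the book's text), the distinction between the matrix product, the element-wise (Hadamard) product, and the vector dot product can be checked numerically with NumPy; the particular numbers are chosen only for demonstration:

```python
import numpy as np

# Matrix product: C[i, j] = sum_k A[i, k] * B[k, j]
A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[5., 6.],
              [7., 8.]])
C = A @ B

# Element-wise (Hadamard) product: same-shape operands, entry by entry
H = A * B

# Dot product of two vectors, written x^T y in the text
x = np.array([1., 2., 3.])
y = np.array([4., 5., 6.])
d = x @ y  # 1*4 + 2*5 + 3*6 = 32

# C[i, j] is the dot product of row i of A with column j of B
assert C[0, 1] == A[0, :] @ B[:, 1]
```

Note that `A @ B` and `A * B` give different results even though both take two same-shaped matrices; confusing the two is a common source of bugs.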
Matrix product operations have many useful properties that make mathematical analysis of matrices more convenient. For example, matrix multiplication is distributive:

A(B + C) = AB + AC.   (2.6)

It is also associative:

A(BC) = (AB)C.   (2.7)

Matrix multiplication is not commutative (the condition AB = BA does not always hold), unlike scalar multiplication. However, the dot product between two vectors is commutative:

x^⊤y = y^⊤x.   (2.8)

The transpose of a matrix product has a simple form:

(AB)^⊤ = B^⊤A^⊤.   (2.9)

This allows us to demonstrate equation 2.8, by exploiting the fact that the value of such a product is a scalar and therefore equal to its own transpose:

x^⊤y = (x^⊤y)^⊤ = y^⊤x.   (2.10)

Since the focus of this textbook is not linear algebra, we do not attempt to develop a comprehensive list of useful properties of the matrix product here, but the reader should be aware that many more exist.

We now know enough linear algebra notation to write down a system of linear equations:

Ax = b   (2.11)
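These identities can be spot-checked numerically on random matrices; a minimal sketch (an illustration added here, not from the book, with shapes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((3, 4))
D = rng.standard_normal((4, 2))

# Distributivity: A(B + C) = AB + AC   (equation 2.6)
assert np.allclose(A @ (B + C), A @ B + A @ C)

# Associativity: A(BC) = (AB)C   (equation 2.7)
assert np.allclose(A @ (B @ D), (A @ B) @ D)

# Transpose of a product: (AB)^T = B^T A^T   (equation 2.9)
assert np.allclose((A @ B).T, B.T @ A.T)

# Matrix multiplication is not commutative in general, but the
# dot product between two vectors is: x^T y = y^T x   (equation 2.8)
xv = rng.standard_normal(5)
yv = rng.standard_normal(5)
assert np.isclose(xv @ yv, yv @ xv)
```

A numeric check on a few random instances is of course no substitute for a proof, but it is a quick way to catch a misremembered identity.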
where A ∈ R^{m×n} is a known matrix, b ∈ R^m is a known vector, and x ∈ R^n is a vector of unknown variables we would like to solve for. Each element x_i of x is one of these unknown variables. Each row of A and each element of b provide another constraint. We can rewrite equation 2.11 as:

A_{1,:} x = b_1   (2.12)
A_{2,:} x = b_2   (2.13)
...   (2.14)
A_{m,:} x = b_m   (2.15)

or, even more explicitly, as:

A_{1,1} x_1 + A_{1,2} x_2 + · · · + A_{1,n} x_n = b_1   (2.16)
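The equivalence between the compact form Ax = b and the row-by-row constraints can be seen on a small concrete system; a sketch added for illustration (the numbers are arbitrary, and `np.linalg.solve` is used here simply to obtain a solution — solution methods are discussed in the sections that follow):

```python
import numpy as np

# A small system Ax = b with m = 2 constraints and n = 2 unknowns:
#   2*x1 + 1*x2 = 5
#   1*x1 + 3*x2 = 10
A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([5., 10.])

x = np.linalg.solve(A, b)  # here x = (1, 3)

# Each row of A and element of b gives one constraint: A[i, :] x = b[i]
for i in range(A.shape[0]):
    assert np.isclose(A[i, :] @ x, b[i])
```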
A_{2,1} x_1 + A_{2,2} x_2 + · · · + A_{2,n} x_n = b_2   (2.17)
...   (2.18)
A_{m,1} x_1 + A_{m,2} x_2 + · · · + A_{m,n} x_n = b_m.   (2.19)

Matrix-vector product notation provides a more compact representation for equations of this form.

2.3 Identity and Inverse Matrices

Linear algebra offers a powerful tool called matrix inversion that allows us to analytically solve equation 2.11 for many values of A.

To describe matrix inversion, we first need to define the concept of an identity matrix. An identity matrix is a matrix that does not change any vector when we multiply that vector by that matrix. We denote the identity matrix that preserves n-dimensional vectors as I_n. Formally, I_n ∈ R^{n×n}, and

∀x ∈ R^n, I_n x = x.   (2.20)

The structure of the identity matrix is simple: all of the entries along the main diagonal are 1, while all of the other entries are zero.

Figure 2.2: Example identity matrix: this is I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
See figure 2.2 for an example.

The matrix inverse of A is denoted as A^{−1}, and it is defined as the matrix such that

A^{−1}A = I_n.   (2.21)

We can now solve equation 2.11 by the following steps:

Ax = b   (2.22)
A^{−1}Ax = A^{−1}b   (2.23)
I_n x = A^{−1}b   (2.24)
x = A^{−1}b.   (2.25)

Of course, this process depends on it being possible to find A^{−1}. We discuss the conditions for the existence of A^{−1} in the following section.

When A^{−1} exists, several different algorithms exist for finding it in closed form. In theory, the same inverse matrix can then be used to solve the equation many times for different values of b. However, A^{−1} is primarily useful as a theoretical tool, and should not actually be used in practice for most software applications. Because A^{−1} can be represented with only limited precision on a digital computer, algorithms that make use of the value of b can usually obtain more accurate estimates of x.

2.4 Linear Dependence and Span

In order for A^{−1} to exist, equation 2.11 must have exactly one solution for every value of b. However, it is also possible for the system of equations to have no solutions or infinitely many solutions for some values of b. It is not possible to have more than one but less than infinitely many solutions for a particular b; if both x and y are solutions, then

z = αx + (1 − α)y   (2.26)
is also a solution for any real α.

To analyze how many solutions the equation has, we can think of the columns of A as specifying different directions we can travel from the origin (the point specified by the vector of all zeros), and determine how many ways there are of reaching b. In this view, each element of x specifies how far we should travel in each of these directions, with x_i specifying how far to move in the direction of column i:

Ax = Σ_i x_i A_{:,i}.   (2.27)

In general, this kind of operation is called a linear combination. Formally, a linear combination of some set of vectors {v^{(1)}, ..., v^{(n)}} is given by multiplying each vector v^{(i)} by a corresponding scalar coefficient and adding the results:

Σ_i c_i v^{(i)}.   (2.28)

The span of a set of vectors is the set of all points obtainable by linear combination of
the original vectors.
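The column-combination view of the matrix-vector product (equation 2.27) can be verified directly; a small sketch added for illustration, with arbitrary example values:

```python
import numpy as np

# Ax is a linear combination of the columns of A, weighted by the
# entries of x:  Ax = sum_i x_i * A[:, i]   (equation 2.27)
A = np.array([[1., 0., 2.],
              [0., 1., 3.]])
x = np.array([2., -1., 1.])

combo = sum(x[i] * A[:, i] for i in range(A.shape[1]))
product = A @ x

# The two views agree
assert np.allclose(product, combo)
```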
Determining whether Ax = b has a solution thus amounts to testing whether b is in the span of the columns of A. This particular span is known as the column space or the range of A.

In order for the system Ax = b to have a solution for all values of b ∈ R^m, we therefore require that the column space of A be all of R^m. If any point in R^m is excluded from the column space, that point is a potential value of b that has no solution. The requirement that the column space of A be all of R^m implies immediately that A must have at least m columns, i.e., n ≥ m. Otherwise, the dimensionality of the column space would be less than m. For example, consider a 3 × 2 matrix. The target b is 3-D, but x is only 2-D, so modifying the value of x at best allows us to trace out a 2-D plane within R^3. The equation has a solution if and only if b lies on that plane.

Having n ≥ m is only a necessary condition for every point to have a solution. It is not a sufficient condition, because it is possible for some of the columns to be redundant. Consider a 2 × 2
matrix where both of the columns are identical. This has the same column space as a 2 × 1 matrix containing only one copy of the replicated column. In other words, the column space is still just a line, and fails to encompass all of R^2, even though there are two columns.

Formally, this kind of redundancy is known as linear dependence. A set of vectors is linearly independent if no vector in the set is a linear combination of the other vectors. If we add a vector to a set that is a linear combination of the other vectors in the set, the new vector does not add any points to the set's span. This means that for the column space of the matrix to encompass all of R^m, the matrix must contain at least one set of m linearly independent columns. This condition is both necessary and sufficient for equation 2.11 to have a solution for every value of b. Note that the requirement is for a set to have exactly m linearly independent columns, not at least m.
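The 2 × 2 example with identical columns can be made concrete; a small numeric sketch (an addition for illustration, using NumPy's rank routine to count linearly independent columns):

```python
import numpy as np

# A 2x2 matrix whose two columns are identical: its column space is just
# the line spanned by (1, 2), so the matrix is singular despite n = m = 2
A = np.array([[1., 1.],
              [2., 2.]])

rank = np.linalg.matrix_rank(A)  # only 1 linearly independent column

# Solving Ax = b fails for a b off that line, e.g. b = (1, 0)
try:
    np.linalg.solve(A, np.array([1., 0.]))
    solvable = True
except np.linalg.LinAlgError:
    solvable = False
```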
No set of m-dimensional vectors can have more than m mutually linearly independent columns, but a matrix with more than m columns may have more than one such set.

In order for the matrix to have an inverse, we additionally need to ensure that equation 2.11 has at most one solution for each value of b. To do so, we need to ensure that the matrix has at most m columns. Otherwise there is more than one way of parametrizing each solution.

Together, this means that the matrix must be square, that is, we require that m = n and that all of the columns must be linearly independent. A square matrix with linearly dependent columns is known as singular.

If A is not square or is square but singular, it can still be possible to solve the equation. However, we cannot use the method of matrix inversion to find the
solution.

So far we have discussed matrix inverses as being multiplied on the left. It is also possible to define an inverse that is multiplied on the right:

AA^{−1} = I.   (2.29)

For square matrices, the left inverse and right inverse are equal.

2.5 Norms

Sometimes we need to measure the size of a vector. In machine learning, we usually measure the size of vectors using a function called a norm. Formally, the L^p norm is given by

||x||_p = ( Σ_i |x_i|^p )^{1/p}   (2.30)

for p ∈ R, p ≥ 1.

Norms, including the L^p norm, are functions mapping vectors to non-negative values. On an intuitive level, the norm of a vector x measures the distance from the origin to the point x. More rigorously, a norm is any function f that satisfies the following properties:

• f(x) = 0 ⇒ x = 0
• f(x + y) ≤ f(x) + f(y) (the triangle inequality)
• ∀α ∈ R, f(αx) = |α| f(x)

The L^2 norm, with p = 2, is known as the Euclidean
norm. It is simply the Euclidean distance from the origin to the point identified by x. The L^2 norm is used so frequently in machine learning that it is often denoted simply as ||x||, with the subscript 2 omitted. It is also common to measure the size of a vector using the squared L^2 norm, which can be calculated simply as x^T x. The squared L^2 norm is more convenient to work with mathematically and computationally than the L^2 norm itself. For example, the derivatives of the squared L^2 norm with respect to each element of x each depend only on the corresponding element of x, while all of the derivatives of the L^2 norm depend on the entire vector. In many contexts, the squared L^2 norm may be undesirable because it increases very slowly near the origin. In several machine learning
applications, it is important to discriminate between elements that are exactly zero and elements that are small but nonzero. In these cases, we turn to a function that grows at the same rate in all locations, but retains mathematical simplicity: the L^1 norm. The L^1 norm may be simplified to ||x||_1 = Σ_i |x_i|. (2.31) The L^1 norm is commonly used in machine learning when the difference between zero and nonzero elements is very important. Every time an element of x moves away from 0 by ε, the L^1 norm increases by ε. We sometimes measure the size of the vector by counting its number of nonzero elements. Some authors refer to this function as the "L^0 norm," but this is incorrect terminology. The number of non-zero entries in a vector is not a norm, because scaling the vector by α does not change the number of nonzero entries. The L^1 norm is often used as a substitute for the number of nonzero entries. One other norm that commonly arises in machine learning is the L^∞ norm, also known as the max norm. This norm simplifies to the absolute value of the
element with the largest magnitude in the vector, ||x||_∞ = max_i |x_i|. (2.32) Sometimes we may also wish to measure the size of a matrix. In the context of deep learning, the most common way to do this is with the otherwise obscure Frobenius norm: ||A||_F = sqrt(Σ_{i,j} A_{i,j}^2), (2.33) which is analogous to the L^2 norm of a vector. The dot product of two vectors can be rewritten in terms of norms. Specifically, x^T y = ||x||_2 ||y||_2 cos θ, (2.34) where θ is the angle between x and y.

2.6 Special Kinds of Matrices and Vectors

Some special kinds of matrices and vectors are particularly useful. Diagonal matrices consist mostly of zeros and have non-zero entries only along the main diagonal. Formally, a matrix D is diagonal if and only if D_{i,j} = 0 for
all i ≠ j. We have already seen one example of a diagonal matrix: the identity matrix, where all of the diagonal entries are 1. We write diag(v) to denote a square diagonal matrix whose diagonal entries are given by the entries of the vector v. Diagonal matrices are of interest in part because multiplying by a diagonal matrix is very computationally efficient. To compute diag(v) x, we only need to scale each element x_i by v_i. In other words, diag(v) x = v ⊙ x. Inverting a square diagonal matrix is also efficient. The inverse exists only if every diagonal entry is nonzero, and in that case, diag(v)^{-1} = diag([1/v_1, ..., 1/v_n]^T). In many cases, we may derive some very general machine learning algorithm in terms of arbitrary matrices, but obtain a less expensive (and less descriptive) algorithm by restricting some matrices to be diagonal. Not all diagonal matrices need be square. It is possible to construct a rectangular diagonal matrix. Non-square diagonal matrices do not have inverses, but it is still possible to multiply by them cheaply. For a non-square
diagonal matrix D, the product Dx will involve scaling each element of x, and either concatenating some zeros to the result if D is taller than it is wide, or discarding some of the last elements of the vector if D is wider than it is tall. A symmetric matrix is any matrix that is equal to its own transpose: A = A^T. (2.35) Symmetric matrices often arise when the entries are generated by some function of two arguments that does not depend on the order of the arguments. For example, if A is a matrix of distance measurements, with A_{i,j} giving the distance from point i to point j, then A_{i,j} = A_{j,i} because distance functions are symmetric. A unit vector is a vector with unit norm: ||x||_2 = 1. (2.36) A vector x and a vector y are orthogonal to each other if x^T y = 0. If both vectors have nonzero norm, this means that they are at a 90
degree angle to each other. In ℝ^n, at most n vectors may be mutually orthogonal with nonzero norm. If the vectors are not only orthogonal but also have unit norm, we call them orthonormal. An orthogonal matrix is a square matrix whose rows are mutually orthonormal and whose columns are mutually orthonormal: A^T A = A A^T = I. (2.37)
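The defining property of an orthogonal matrix can be checked numerically. A minimal sketch in NumPy, where the 2-D rotation matrix is an illustrative choice of ours, not an example from the text:

```python
import numpy as np

# A 2-D rotation matrix is a classic example of an orthogonal matrix.
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Rows and columns are orthonormal, so A^T A = A A^T = I.
assert np.allclose(A.T @ A, np.eye(2))
assert np.allclose(A @ A.T, np.eye(2))

# Consequently the inverse is simply the transpose, which is cheap to compute.
assert np.allclose(np.linalg.inv(A), A.T)
```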
This implies that A^{-1} = A^T, (2.38) so orthogonal matrices are of interest because their inverse is very cheap to compute. Pay careful attention to the definition of orthogonal matrices: counterintuitively, their rows are not merely orthogonal but fully orthonormal. There is no special term for a matrix whose rows or columns are orthogonal but not orthonormal.

2.7 Eigendecomposition

Many mathematical objects can be understood better by breaking them into constituent parts, or finding some properties of them that are universal, not caused by the way we choose to represent them. For example, integers can be decomposed into prime factors. The way we represent the number 12 will change depending on whether we write it in base ten or in binary, but it will always be true that 12 = 2 × 2 × 3. From this representation we can conclude useful properties, such as that 12 is not divisible by 5, or that any integer multiple of 12 will be divisible by 3. Much as we can discover something about the true nature of an integer by decomposing it into prime factors, we can also decompose matrices in ways that show us information about their functional properties
that is not obvious from the representation of the matrix as an array of elements. One of the most widely used kinds of matrix decomposition is called eigendecomposition, in which we decompose a matrix into a set of eigenvectors and eigenvalues. An eigenvector of a square matrix A is a non-zero vector v such that multiplication by A alters only the scale of v: A v = λ v. (2.39) The scalar λ is known as the eigenvalue corresponding to this eigenvector. (One can also find a left eigenvector such that v^T A = λ v^T, but we are usually concerned with right eigenvectors.) If v is an eigenvector of A, then so is any rescaled vector s v for s ∈ ℝ, s ≠ 0. Moreover, s v still has the same eigenvalue. For this reason, we usually only look for unit eigenvectors.
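The defining relation A v = λ v, and the fact that rescaling an eigenvector leaves its eigenvalue unchanged, can be sketched numerically. The small symmetric matrix below is an arbitrary illustration of ours, not from the text:

```python
import numpy as np

# An arbitrary symmetric matrix, chosen so its eigenvectors are real.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
lam = eigenvalues[0]
v = eigenvectors[:, 0]   # NumPy returns unit eigenvectors as columns

# Multiplication by A only rescales v: A v = lambda v.
assert np.allclose(A @ v, lam * v)

# Any rescaled vector s v is also an eigenvector, with the same eigenvalue.
s = -4.2
assert np.allclose(A @ (s * v), lam * (s * v))
```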
Suppose that a matrix A has n linearly independent eigenvectors, {v^(1), ..., v^(n)}, with corresponding eigenvalues {λ_1, ..., λ_n}. We may concatenate all of the
Figure 2.3: An example of the effect of eigenvectors and eigenvalues. Here, we have a matrix A with two orthonormal eigenvectors, v^(1) with eigenvalue λ_1 and v^(2) with eigenvalue λ_2. (Left) We plot the set of all unit vectors u ∈ ℝ^2 as a unit circle. (Right) We plot the set of all points Au. By observing the way that A distorts the unit circle, we can see that it scales space in direction v^(i) by λ_i.

eigenvectors to form a matrix V with one eigenvector per column: V = [v^(1), ..., v^(n)]. Likewise, we can concatenate the eigenvalues to form a vector λ = [λ_1, ..., λ_n]^T. The eigendecomposition of A is then given by A = V diag(λ) V^{-1}. (2.40) We have seen that constructing matrices with specific eigenvalues and eigenvectors allows us to stretch space in desired
directions. However, we often want to decompose matrices into their eigenvalues and eigenvectors. Doing so can help us to analyze certain properties of the matrix, much as decomposing an integer into its prime factors can help us understand the behavior of that integer. Not every matrix can be decomposed into eigenvalues and eigenvectors. In some
cases, the decomposition exists but may involve complex rather than real numbers. Fortunately, in this book, we usually need to decompose only a specific class of matrices that have a simple decomposition. Specifically, every real symmetric matrix can be decomposed into an expression using only real-valued eigenvectors and eigenvalues: A = Q Λ Q^T, (2.41) where Q is an orthogonal matrix composed of eigenvectors of A, and Λ is a diagonal matrix. The eigenvalue Λ_{i,i} is associated with the eigenvector in column i of Q, denoted as Q_{:,i}. Because Q is an orthogonal matrix, we can think of A as scaling space by λ_i in direction v^(i). See figure 2.3 for an example. While any real symmetric matrix A is guaranteed to have an eigendecomposition, the eigendecomposition may not be unique. If any two or more eigenvectors share the same eigenvalue, then any set of orthogonal vectors lying in their span are also eigenvectors with that eigenvalue, and we could equivalently
choose a Q using those eigenvectors instead. By convention, we usually sort the entries of Λ in descending order. Under this convention, the eigendecomposition is unique only if all of the eigenvalues are unique. The eigendecomposition of a matrix tells us many useful facts about the matrix. The matrix is singular if and only if any of the eigenvalues are zero. The eigendecomposition of a real symmetric matrix can also be used to optimize quadratic expressions of the form f(x) = x^T A x subject to ||x||_2 = 1. Whenever x is equal to an eigenvector of A, f takes on the value of the corresponding eigenvalue. The maximum value of f within the constraint region is the maximum eigenvalue, and its minimum value within the constraint region is the minimum eigenvalue. A matrix whose eigenvalues are all positive is called positive definite. A
matrix whose eigenvalues are all positive or zero-valued is called positive semidefinite. Likewise, if all eigenvalues are negative, the matrix is negative definite, and if all eigenvalues are negative or zero-valued, it is negative semidefinite. Positive semidefinite matrices are interesting because they guarantee that ∀x, x^T A x ≥ 0. Positive definite matrices additionally guarantee that x^T A x = 0 ⇒ x = 0.

2.8 Singular Value Decomposition

In section 2.7, we saw how to decompose a matrix into eigenvectors and eigenvalues. The singular value decomposition (SVD) provides another way to factorize a matrix, into singular vectors and singular values. The SVD allows us to discover some of the same kind of information as the eigendecomposition. However,
the SVD is more generally applicable. Every real matrix has a singular value decomposition, but the same is not true of the eigenvalue decomposition. For example, if a matrix is not square, the eigendecomposition is not defined, and we must use a singular value decomposition instead. Recall that the eigendecomposition involves analyzing a matrix A to discover a matrix V of eigenvectors and a vector of eigenvalues λ such that we can rewrite A as A = V diag(λ) V^{-1}. (2.42) The singular value decomposition is similar, except this time we will write A as a product of three matrices: A = U D V^T. (2.43) Suppose that A is an m × n matrix. Then U is defined to be an m × m matrix, D to be an m × n matrix, and V to be an n × n matrix. Each of these matrices is defined to have a special structure. The matrices U and V are both defined to be orthogonal matrices. The matrix D is defined to be a diagonal matrix. Note that D is not necessarily square. The elements along the diagonal of D
are known as the singular values of the matrix A. The columns of U are known as the left-singular vectors. The columns of V are known as the right-singular vectors. We can actually interpret the singular value decomposition of A in terms of the eigendecomposition of functions of A. The left-singular vectors of A are the eigenvectors of A A^T. The right-singular vectors of A are the eigenvectors of A^T A. The non-zero singular values of A are the square roots of the eigenvalues of A^T A. The same is true for A A^T. Perhaps the most useful feature of the SVD is that we can use it to partially generalize matrix inversion to non-square matrices, as we will see in the next section.

2.9 The Moore-Penrose Pseudoinverse

Matrix inversion is not defined for matrices that are not square. Suppose we want to make a left-inverse B of a matrix A, so that we can solve a linear equation
A x = y (2.44)
by left-multiplying each side to obtain x = B y. (2.45) Depending on the structure of the problem, it may not be possible to design a unique mapping from A to B. If A is taller than it is wide, then it is possible for this equation to have no solution. If A is wider than it is tall, then there could be multiple possible solutions. The Moore-Penrose pseudoinverse allows us to make some headway in these cases. The pseudoinverse of A is defined as a matrix A^+ = lim_{α→0} (A^T A + α I)^{-1} A^T. (2.46) Practical algorithms for computing the pseudoinverse are not based on this definition, but rather the formula A^+ = V D^+ U^T, (2.47) where U, D and V are the singular value decomposition of A, and the pseudoinverse D^+ of a diagonal matrix D is obtained by taking the reciprocal of its non-zero elements then taking the transpose of the resulting matrix. When A has more columns than rows, then solving a linear equation using the pseudoinverse provides one of the many possible solutions. Specifically,
it provides the solution x = A^+ y with minimal Euclidean norm ||x||_2 among all possible solutions. When A has more rows than columns, it is possible for there to be no solution. In this case, using the pseudoinverse gives us the x for which A x is as close as possible to y in terms of Euclidean norm ||A x − y||_2.

2.10 The Trace Operator

The trace operator gives the sum of all of the diagonal entries of a matrix: Tr(A) = Σ_i A_{i,i}. (2.48) The trace operator is useful for a variety of reasons. Some operations that are difficult to specify without resorting to summation notation can be specified using
matrix products and the trace operator. For example, the trace operator provides an alternative way of writing the Frobenius norm of a matrix: ||A||_F = sqrt(Tr(A A^T)). (2.49) Writing an expression in terms of the trace operator opens up opportunities to manipulate the expression using many useful identities. For example, the trace operator is invariant to the transpose operator: Tr(A) = Tr(A^T). (2.50) The trace of a square matrix composed of many factors is also invariant to moving the last factor into the first position, if the shapes of the corresponding matrices allow the resulting product to be defined: Tr(A B C) = Tr(C A B) = Tr(B C A), (2.51) or more generally, Tr(Π_{i=1}^{n} F^(i)) = Tr(F^(n) Π_{i=1}^{n−1} F^(i)). (2.52) This invariance to cyclic permutation holds even if the resulting product has a different shape. For example, for A ∈ ℝ^{m×n} and B ∈ ℝ^{n×m}, we have Tr(A B) = Tr(B A) (2.53) even though A B ∈ ℝ^{m×m}
and B A ∈ ℝ^{n×n}. Another useful fact to keep in mind is that a scalar is its own trace: a = Tr(a).

2.11 The Determinant

The determinant of a square matrix, denoted det(A), is a function mapping matrices to real scalars. The determinant is equal to the product of all the eigenvalues of the matrix. The absolute value of the determinant can be thought of as a measure of how much multiplication by the matrix expands or contracts space. If the determinant is 0, then space is contracted completely along at least one dimension, causing it to lose all of its volume. If the determinant is 1, then the transformation preserves volume.
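Both facts above — the cyclic-permutation property of the trace from equation 2.53, and the determinant as the product of the eigenvalues — are easy to confirm numerically. A minimal sketch, where the random matrices and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))

# Tr(AB) = Tr(BA), even though AB is 3x3 and BA is 5x5 (equation 2.53).
assert np.allclose(np.trace(A @ B), np.trace(B @ A))

# det(M) equals the product of the eigenvalues of M. The eigenvalues of a
# real matrix may be complex, but their product is real up to rounding.
M = rng.standard_normal((4, 4))
eigenvalues = np.linalg.eigvals(M)
assert np.allclose(np.linalg.det(M), np.prod(eigenvalues).real)
```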
2.12 Example: Principal Components Analysis

One simple machine learning algorithm, principal components analysis or PCA, can be derived using only knowledge of basic linear algebra. Suppose we have a collection of m points {x^(1), ..., x^(m)} in ℝ^n, and suppose we would like to apply lossy compression to these points. Lossy compression means storing the points in a way that requires less memory but may lose some precision. We would like to lose as little precision as possible. One way we can encode these points is to represent a lower-dimensional version of them. For each point x^(i) ∈ ℝ^n we will find a corresponding code vector c^(i) ∈ ℝ^l. If l is smaller than n, it will take less memory to store the code points than the original data. We will want to find some encoding function that produces the code for an input, f(x) = c, and a decoding function that produces the reconstructed input given its code, x ≈ g(f(x)). PCA is defined by our choice of the decoding function. Specifically, to make the decoder very simple, we choose to use matrix multiplication to map
the code back into ℝ^n. Let g(c) = D c, where D ∈ ℝ^{n×l} is the matrix defining the decoding. Computing the optimal code for this decoder could be a difficult problem. To keep the encoding problem easy, PCA constrains the columns of D to be orthogonal to each other. (Note that D is still not technically "an orthogonal matrix" unless l = n.) With the problem as described so far, many solutions are possible, because we can increase the scale of D_{:,i} if we decrease c_i proportionally for all points. To give the problem a unique solution, we constrain all of the columns of D to have unit norm. In order to turn this basic idea into an algorithm we can implement, the first thing we need to do is figure out how to generate the optimal code point c* for each input point x. One way to do this is to minimize the distance between the input point x and its reconstruction, g(c*).
We can measure this distance using a norm. In the principal components algorithm, we use the L^2 norm: c* = argmin_c ||x − g(c)||_2. (2.54) We can switch to the squared L^2 norm instead of the L^2 norm itself, because both are minimized by the same value of c. Both are minimized by the same value of c because the L^2 norm is non-negative and the squaring operation is
monotonically increasing for non-negative arguments:

c* = argmin_c ||x − g(c)||_2^2. (2.55)

The function being minimized simplifies to

(x − g(c))^T (x − g(c)) (2.56)

(by the definition of the L^2 norm, equation 2.30)

= x^T x − x^T g(c) − g(c)^T x + g(c)^T g(c) (2.57)

(by the distributive property)

= x^T x − 2 x^T g(c) + g(c)^T g(c) (2.58)

(because the scalar g(c)^T x is equal to the transpose of itself). We can now change the function being minimized again, to omit the first term, since this term does not depend on c:

c* = argmin_c −2 x^T g(c) + g(c)^T g(c). (2.59)

To make further progress, we must substitute in the definition of g(c):
c* = argmin_c −2 x^T D c + c^T D^T D c (2.60)
= argmin_c −2 x^T D c + c^T I_l c (2.61)
(by the orthogonality and unit norm constraints on D)
= argmin_c −2 x^T D c + c^T c. (2.62)

We can solve this optimization problem using vector calculus (see section 4.3 if you do not know how to do this):

∇_c (−2 x^T D c + c^T c) = 0 (2.63)
−2 D^T x + 2 c = 0 (2.64)
c = D^T x. (2.65)
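The result of this derivation, c = D^T x, can be checked against a brute-force comparison. A sketch under assumed toy dimensions (n = 3, l = 2) and an assumed QR-based construction of the orthonormal columns of D:

```python
import numpy as np

rng = np.random.default_rng(1)
n, l = 3, 2

# Build a decoder D with orthonormal columns via QR (an assumed construction).
D, _ = np.linalg.qr(rng.standard_normal((n, l)))
x = rng.standard_normal(n)

# Optimal code from the derivation: c = D^T x (equation 2.65).
c_opt = D.T @ x

# Any perturbed code yields a reconstruction at least as far from x.
for _ in range(1000):
    c = c_opt + 0.5 * rng.standard_normal(l)
    assert np.linalg.norm(x - D @ c_opt) <= np.linalg.norm(x - D @ c) + 1e-12
```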
This makes the algorithm efficient: we can optimally encode x just using a matrix-vector operation. To encode a vector, we apply the encoder function f(x) = D^T x. (2.66) Using a further matrix multiplication, we can also define the PCA reconstruction operation: r(x) = g(f(x)) = D D^T x. (2.67) Next, we need to choose the encoding matrix D. To do so, we revisit the idea of minimizing the L^2 distance between inputs and reconstructions. Since we will use the same matrix D to decode all of the points, we can no longer consider the points in isolation. Instead, we must minimize the Frobenius norm of the matrix of errors computed over all dimensions and all points:

D* = argmin_D sqrt(Σ_{i,j} (x^(i)_j − r(x^(i))_j)^2) subject to D^T D = I_l. (2.68)

To derive the algorithm for finding D*, we will start by considering the case where l = 1. In this case, D is just a single vector, d.
Substituting equation 2.67 into equation 2.68 and simplifying D into d, the problem reduces to

d* = argmin_d Σ_i ||x^(i) − d d^T x^(i)||_2^2 subject to ||d||_2 = 1. (2.69)

The above formulation is the most direct way of performing the substitution, but is not the most stylistically pleasing way to write the equation. It places the scalar value d^T x^(i) on the right of the vector d. It is more conventional to write scalar coefficients on the left of the vector they operate on. We therefore usually write such a formula as

d* = argmin_d Σ_i ||x^(i) − d^T x^(i) d||_2^2 subject to ||d||_2 = 1, (2.70)

or, exploiting the fact that a scalar is its own transpose, as
d* = argmin_d Σ_i ||x^(i) − x^(i)T d d||_2^2 subject to ||d||_2 = 1. (2.71)

The reader should aim to become familiar with such cosmetic rearrangements.
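Because these rearrangements are purely notational, all three objectives take identical values, which is easy to confirm numerically. A sketch with assumed toy data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 10
X = rng.standard_normal((m, n))   # rows are the points x^(i)
d = rng.standard_normal(n)
d /= np.linalg.norm(d)            # enforce the unit-norm constraint

# The three equivalent objectives from equations 2.69-2.71.
obj_69 = sum(np.linalg.norm(x - d * (d @ x)) ** 2 for x in X)
obj_70 = sum(np.linalg.norm(x - (d @ x) * d) ** 2 for x in X)
obj_71 = sum(np.linalg.norm(x - (x @ d) * d) ** 2 for x in X)

assert np.isclose(obj_69, obj_70) and np.isclose(obj_70, obj_71)
```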
At this point, it can be helpful to rewrite the problem in terms of a single design matrix of examples, rather than as a sum over separate example vectors. This will allow us to use more compact notation. Let X ∈ ℝ^{m×n} be the matrix defined by stacking all of the vectors describing the points, such that X_{i,:} = x^(i)T. We can now rewrite the problem as

d* = argmin_d ||X − X d d^T||_F^2 subject to d^T d = 1. (2.72)

Disregarding the constraint for the moment, we can simplify the Frobenius norm portion as follows:

argmin_d ||X − X d d^T||_F^2 (2.73)
= argmin_d Tr((X − X d d^T)^T (X − X d d^T)) (2.74)
(by equation 2.49)
= argmin_d Tr(X^T X − X^T X d d^T − d d^T X^T X + d d^T X^T X d d^T) (2.75)
= argmin_d Tr(X^T X) − Tr(X^T X d d^T) − Tr(d d^T X^T X) + Tr(d d^T X^T X d d^T) (2.76)
$$= \arg\min_{d} -\operatorname{tr}(X^\top X d d^\top) - \operatorname{tr}(d d^\top X^\top X) + \operatorname{tr}(d d^\top X^\top X d d^\top) \tag{2.77}$$

(because terms not involving $d$ do not affect the $\arg\min$)

$$= \arg\min_{d} -2 \operatorname{tr}(X^\top X d d^\top) + \operatorname{tr}(d d^\top X^\top X d d^\top) \tag{2.78}$$

(because we can cycle the order of the matrices inside a trace, equation 2.52)

$$= \arg\min_{d} -2 \operatorname{tr}(X^\top X d d^\top) + \operatorname{tr}(X^\top X d d^\top d d^\top) \tag{2.79}$$

(using the same property again)

At this point, we re-introduce the constraint:

$$\arg\min_{d} -2 \operatorname{tr}(X^\top X d d^\top) + \operatorname{tr}(X^\top X d d^\top d d^\top) \ \text{ subject to } d^\top d = 1 \tag{2.80}$$

$$= \arg\min_{d} -2 \operatorname{tr}(X^\top X d d^\top) + \operatorname{tr}(X^\top X d d^\top) \ \text{ subject to } d^\top d = 1 \tag{2.81}$$

(due to the constraint)

$$= \arg\min_{d} -\operatorname{tr}(X^\top X d d^\top) \ \text{ subject to } d^\top d = 1 \tag{2.82}$$
$$= \arg\max_{d} \operatorname{tr}(X^\top X d d^\top) \ \text{ subject to } d^\top d = 1 \tag{2.83}$$

$$= \arg\max_{d} \operatorname{tr}(d^\top X^\top X d) \ \text{ subject to } d^\top d = 1 \tag{2.84}$$

This optimization problem may be solved using eigendecomposition. Specifically, the optimal $d$ is given by the eigenvector of $X^\top X$ corresponding to the largest eigenvalue. This derivation is specific to the case of $l = 1$ and recovers only the first principal component. More generally, when we wish to recover a basis of principal components, the matrix $D$ is given by the $l$ eigenvectors corresponding to the largest eigenvalues. This may be shown using proof by induction. We recommend writing this proof as an exercise.

Linear algebra is one of the fundamental mathematical disciplines that is necessary to understand deep learning. Another key area of mathematics that is ubiquitous in machine learning is probability theory, presented next.
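The eigendecomposition result can be checked numerically. The following sketch (NumPy, with an arbitrary randomly generated design matrix; not from the book) compares the reconstruction error of the top eigenvector of $X^\top X$ against other unit vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # design matrix: one example per row

# Optimal d: eigenvector of X^T X with the largest eigenvalue (equation 2.84).
# eigh handles symmetric matrices and returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(X.T @ X)
d_star = eigvecs[:, np.argmax(eigvals)]  # unit-norm top eigenvector

def recon_error(d):
    """Squared reconstruction error ||X - X d d^T||_F^2 (equation 2.72)."""
    return np.linalg.norm(X - X @ np.outer(d, d)) ** 2

# The top eigenvector should beat any other unit vector.
for _ in range(200):
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)
    assert recon_error(d_star) <= recon_error(d) + 1e-9
```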
Chapter 3. Probability and Information Theory

In this chapter, we describe probability theory and information theory. Probability theory is a mathematical framework for representing uncertain statements. It provides a means of quantifying uncertainty and axioms for deriving new uncertain statements. In artificial intelligence applications, we use probability theory in two major ways. First, the laws of probability tell us how AI systems should reason, so we design our algorithms to compute or approximate various expressions derived using probability theory. Second, we can use probability and statistics to theoretically analyze the behavior of proposed AI systems.

Probability theory is a fundamental tool of many disciplines of science and engineering. We provide this chapter to ensure that readers whose background is primarily in software engineering, with limited exposure to probability theory, can understand the material in this book. While probability theory allows us to make uncertain statements and reason in the presence of uncertainty, information theory allows us to quantify the amount of uncertainty in a probability distribution.

If you are already familiar with probability theory and information theory, you may wish to skip all of this chapter except for section 3.14, which describes the graphs we use to describe structured probabilistic models for machine learning.
If you have absolutely no prior experience with these subjects, this chapter should be sufficient to successfully carry out deep learning research projects, but we do suggest that you consult an additional resource, such as Jaynes (2003).
3.1 Why Probability?

Many branches of computer science deal mostly with entities that are entirely deterministic and certain. A programmer can usually safely assume that a CPU will execute each machine instruction flawlessly. Errors in hardware do occur, but are rare enough that most software applications do not need to be designed to account for them. Given that many computer scientists and software engineers work in a relatively clean and certain environment, it can be surprising that machine learning makes heavy use of probability theory.

This is because machine learning must always deal with uncertain quantities, and sometimes may also need to deal with stochastic (non-deterministic) quantities. Uncertainty and stochasticity can arise from many sources. Researchers have made compelling arguments for quantifying uncertainty using probability since at least the 1980s. Many of the arguments presented here are summarized from or inspired by Pearl (1988).

Nearly all activities require some ability to reason in the presence of uncertainty. In fact, beyond mathematical statements that are true by definition, it is difficult to think of any proposition that is absolutely true or any event that is absolutely guaranteed to occur. There are three possible sources of uncertainty:
1. Inherent stochasticity in the system being modeled. For example, most interpretations of quantum mechanics describe the dynamics of subatomic particles as being probabilistic. We can also create theoretical scenarios that we postulate to have random dynamics, such as a hypothetical card game where we assume that the cards are truly shuffled into a random order.

2. Incomplete observability. Even deterministic systems can appear stochastic when we cannot observe all of the variables that drive the behavior of the system. For example, in the Monty Hall problem, a game show contestant is asked to choose between three doors and wins a prize held behind the chosen door. Two doors lead to a goat while a third leads to a car. The outcome given the contestant's choice is deterministic, but from the contestant's point of view, the outcome is uncertain.
3. Incomplete modeling. When we use a model that must discard some of the information we have observed, the discarded information results in uncertainty in the model's predictions. For example, suppose we build a robot that can exactly observe the location of every object around it.
If the robot discretizes space when predicting the future location of these objects, then the discretization makes the robot immediately become uncertain about the precise position of objects: each object could be anywhere within the discrete cell that it was observed to occupy.

In many cases, it is more practical to use a simple but uncertain rule rather than a complex but certain one, even if the true rule is deterministic and our modeling system has the fidelity to accommodate a complex rule. For example, the simple rule "most birds fly" is cheap to develop and is broadly useful, while a rule of the form "birds fly, except for very young birds that have not yet learned to fly, sick or injured birds that have lost the ability to fly, flightless species of birds including the cassowary, ostrich and kiwi..." is expensive to develop, maintain and communicate, and after all of this effort is still very brittle and prone to failure.
While it should be clear that we need a means of representing and reasoning about uncertainty, it is not immediately obvious that probability theory can provide all of the tools we want for artificial intelligence applications. Probability theory was originally developed to analyze the frequencies of events. It is easy to see how probability theory can be used to study events like drawing a certain hand of cards in a game of poker. These kinds of events are often repeatable. When we say that an outcome has a probability $p$ of occurring, it means that if we repeated the experiment (e.g., drawing a hand of cards) infinitely many times, then proportion $p$ of the repetitions would result in that outcome. This kind of reasoning does not seem immediately applicable to propositions that are not repeatable. If a doctor analyzes a patient and says that the patient has a 40% chance of having the flu, this means something very different: we cannot make infinitely many replicas of the patient, nor is there any reason to believe that different replicas of the patient would present with the same symptoms yet have varying underlying conditions.
In the case of the doctor diagnosing the patient, we use probability to represent a degree of belief, with 1 indicating absolute certainty that the patient has the flu and 0 indicating absolute certainty that the patient does not have the flu. The former kind of probability, related directly to the rates at which events occur, is known as frequentist probability, while the latter, related to qualitative levels of certainty, is known as Bayesian probability.

If we list several properties that we expect common sense reasoning about uncertainty to have, then the only way to satisfy those properties is to treat Bayesian probabilities as behaving exactly the same as frequentist probabilities. For example, if we want to compute the probability that a player will win a poker game given that she has a certain set of cards, we use exactly the same formulas as when we compute the probability that a patient has a disease given that she has certain symptoms.
For more details about why a small set of common sense assumptions implies that the same axioms must control both kinds of probability, see Ramsey (1926).

Probability can be seen as the extension of logic to deal with uncertainty. Logic provides a set of formal rules for determining what propositions are implied to be true or false given the assumption that some other set of propositions is true or false. Probability theory provides a set of formal rules for determining the likelihood of a proposition being true given the likelihood of other propositions.

3.2 Random Variables

A random variable is a variable that can take on different values randomly. We typically denote the random variable itself with a lowercase letter in plain typeface, and the values it can take on with lowercase script letters. For example, $x_1$ and $x_2$ are both possible values that the random variable $\mathrm{x}$ can take on. For vector-valued variables, we would write the random variable as $\mathbf{x}$ and one of its values as $x$. On its own, a random variable is just a description of the states that are possible; it must be coupled with a probability distribution that specifies how likely each of these states are.

Random variables may be discrete or continuous.
A discrete random variable is one that has a finite or countably infinite number of states. Note that these states are not necessarily the integers; they can also just be named states that are not considered to have any numerical value. A continuous random variable is associated with a real value.

3.3 Probability Distributions

A probability distribution is a description of how likely a random variable or set of random variables is to take on each of its possible states. The way we describe probability distributions depends on whether the variables are discrete or continuous.

3.3.1 Discrete Variables and Probability Mass Functions

A probability distribution over discrete variables may be described using a probability mass function (PMF). We typically denote probability mass functions with a capital $P$.
Often, we associate each random variable with a different probability mass function, and the reader must infer which probability mass function to use based on the identity of the random variable, rather than the name of the function; $P(\mathrm{x})$ is usually not the same as $P(\mathrm{y})$.

The probability mass function maps from a state of a random variable to the probability of that random variable taking on that state. The probability that $\mathrm{x} = x$ is denoted as $P(x)$, with a probability of 1 indicating that $\mathrm{x} = x$ is certain and a probability of 0 indicating that $\mathrm{x} = x$ is impossible. Sometimes to disambiguate which PMF to use, we write the name of the random variable explicitly: $P(\mathrm{x} = x)$. Sometimes we define a variable first, then use $\sim$ notation to specify which distribution it follows later: $\mathrm{x} \sim P(\mathrm{x})$.

Probability mass functions can act on many variables at the same time. Such a probability distribution over many variables is known as a joint probability distribution. $P(\mathrm{x} = x, \mathrm{y} = y)$ denotes the probability that $\mathrm{x} = x$ and $\mathrm{y} = y$ simultaneously. We may also write $P(x, y)$ for brevity.
To be a probability mass function on a random variable $\mathrm{x}$, a function $P$ must satisfy the following properties:

• The domain of $P$ must be the set of all possible states of $\mathrm{x}$.

• $\forall x \in \mathrm{x},\ 0 \le P(x) \le 1$. An impossible event has probability 0, and no state can be less probable than that. Likewise, an event that is guaranteed to happen has probability 1, and no state can have a greater chance of occurring.

• $\sum_{x \in \mathrm{x}} P(x) = 1$. We refer to this property as being normalized. Without this property, we could obtain probabilities greater than one by computing the probability of one of many events occurring.

For example, consider a single discrete random variable $\mathrm{x}$ with $k$ different states. We can place a uniform distribution on $\mathrm{x}$, making each of its states equally likely, by setting its probability mass function to

$$P(\mathrm{x} = x_i) = \frac{1}{k} \tag{3.1}$$

for all $i$.
We can see that this fits the requirements for a probability mass function. The value $\frac{1}{k}$ is positive because $k$ is a positive integer. We also see that

$$\sum_i P(\mathrm{x} = x_i) = \sum_i \frac{1}{k} = \frac{k}{k} = 1, \tag{3.2}$$

so the distribution is properly normalized.
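As a concrete illustration (a small Python sketch with an arbitrary choice of $k$; not part of the text), the three PMF properties can be verified directly for this uniform distribution:

```python
# Sketch: a uniform PMF over k discrete states, checking the PMF
# properties from this section (0 <= P(x) <= 1, and normalization).
k = 6
pmf = {x_i: 1.0 / k for x_i in range(k)}  # P(x = x_i) = 1/k, equation 3.1

assert all(0.0 <= p <= 1.0 for p in pmf.values())
assert abs(sum(pmf.values()) - 1.0) < 1e-12  # equation 3.2: sums to 1
```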
3.3.2 Continuous Variables and Probability Density Functions

When working with continuous random variables, we describe probability distributions using a probability density function (PDF) rather than a probability mass function. To be a probability density function, a function $p$ must satisfy the following properties:

• The domain of $p$ must be the set of all possible states of $\mathrm{x}$.

• $\forall x \in \mathrm{x},\ p(x) \ge 0$. Note that we do not require $p(x) \le 1$.

• $\int p(x)\, dx = 1$.

A probability density function $p(x)$ does not give the probability of a specific state directly; instead the probability of landing inside an infinitesimal region with volume $\delta x$ is given by $p(x)\, \delta x$.

We can integrate the density function to find the actual probability mass of a set of points. Specifically, the probability that $x$ lies in some set $\mathbb{S}$ is given by the integral of $p(x)$ over that set. In the univariate example, the probability that $x$ lies in the interval $[a, b]$ is given by $\int_{[a,b]} p(x)\, dx$.
For an example of a probability density function corresponding to a specific probability density over a continuous random variable, consider a uniform distribution on an interval of the real numbers. We can do this with a function $u(x; a, b)$, where $a$ and $b$ are the endpoints of the interval, with $b > a$. The ";" notation means "parametrized by"; we consider $x$ to be the argument of the function, while $a$ and $b$ are parameters that define the function. To ensure that there is no probability mass outside the interval, we say $u(x; a, b) = 0$ for all $x \notin [a, b]$. Within $[a, b]$, $u(x; a, b) = \frac{1}{b - a}$. We can see that this is nonnegative everywhere. Additionally, it integrates to 1. We often denote that $x$ follows the uniform distribution on $[a, b]$ by writing $\mathrm{x} \sim U(a, b)$.
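A short Python sketch can make this concrete (the endpoints and the midpoint-rule integrator below are illustrative choices, not from the book): the density integrates to 1, and integrating over a sub-interval gives the probability mass it contains.

```python
# Sketch: the uniform density u(x; a, b) = 1/(b - a) on [a, b], with a
# simple midpoint Riemann sum standing in for exact integration.
a, b = 2.0, 5.0

def u(x, a, b):
    """Uniform density: 0 outside [a, b], 1/(b - a) inside."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

def integrate(f, lo, hi, n=100_000):
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

total = integrate(lambda x: u(x, a, b), 0.0, 10.0)  # should be close to 1
p_sub = integrate(lambda x: u(x, a, b), 3.0, 4.0)   # P(3 <= x <= 4) = 1/3
```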
3.4 Marginal Probability

Sometimes we know the probability distribution over a set of variables and we want to know the probability distribution over just a subset of them. The probability distribution over the subset is known as the marginal probability distribution.

For example, suppose we have discrete random variables $\mathrm{x}$ and $\mathrm{y}$, and we know $P(\mathrm{x}, \mathrm{y})$. We can find $P(\mathrm{x})$ with the sum rule:

$$\forall x \in \mathrm{x},\ P(\mathrm{x} = x) = \sum_y P(\mathrm{x} = x, \mathrm{y} = y). \tag{3.3}$$
The name "marginal probability" comes from the process of computing marginal probabilities on paper. When the values of $P(\mathrm{x}, \mathrm{y})$ are written in a grid with different values of $x$ in rows and different values of $y$ in columns, it is natural to sum across a row of the grid, then write $P(x)$ in the margin of the paper just to the right of the row.

For continuous variables, we need to use integration instead of summation:

$$p(x) = \int p(x, y)\, dy. \tag{3.4}$$

3.5 Conditional Probability

In many cases, we are interested in the probability of some event, given that some other event has happened. This is called a conditional probability. We denote the conditional probability that $\mathrm{y} = y$ given $\mathrm{x} = x$ as $P(\mathrm{y} = y \mid \mathrm{x} = x)$. This conditional probability can be computed with the formula

$$P(\mathrm{y} = y \mid \mathrm{x} = x) = \frac{P(\mathrm{y} = y, \mathrm{x} = x)}{P(\mathrm{x} = x)}. \tag{3.5}$$

The conditional probability is only defined when $P(\mathrm{x} = x) > 0$. We cannot compute the conditional probability conditioned on an event that never happens.
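The sum rule and the conditional-probability formula can both be exercised on a small tabular joint distribution (the table values below are made up purely for illustration):

```python
# Sketch: the sum rule (equation 3.3) and the conditional-probability
# formula (equation 3.5) applied to a small tabular joint P(x, y).
joint = {
    ("x1", "y1"): 0.10, ("x1", "y2"): 0.30,
    ("x2", "y1"): 0.25, ("x2", "y2"): 0.35,
}

def marginal_x(x):
    # Sum rule: P(x) = sum over y of P(x, y)
    return sum(p for (xv, _), p in joint.items() if xv == x)

def conditional_y_given_x(y, x):
    # Equation 3.5: P(y | x) = P(y, x) / P(x); defined only when P(x) > 0
    px = marginal_x(x)
    return joint[(x, y)] / px
```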
It is important not to confuse conditional probability with computing what would happen if some action were undertaken. The conditional probability that a person is from Germany given that they speak German is quite high, but if a randomly selected person is taught to speak German, their country of origin does not change. Computing the consequences of an action is called making an intervention query. Intervention queries are the domain of causal modeling, which we do not explore in this book.

3.6 The Chain Rule of Conditional Probabilities

Any joint probability distribution over many random variables may be decomposed into conditional distributions over only one variable:

$$P(\mathrm{x}^{(1)}, \ldots, \mathrm{x}^{(n)}) = P(\mathrm{x}^{(1)}) \prod_{i=2}^{n} P(\mathrm{x}^{(i)} \mid \mathrm{x}^{(1)}, \ldots, \mathrm{x}^{(i-1)}). \tag{3.6}$$
This observation is known as the chain rule or product rule of probability. It follows immediately from the definition of conditional probability in equation 3.5.
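As a numerical check (a Python sketch over an arbitrary randomly generated joint distribution; not part of the text), the chain rule can be verified exhaustively for three binary variables:

```python
# Sketch: verifying the chain rule (equation 3.6) on a random joint
# distribution over three binary variables a, b, c.
import itertools
import random

random.seed(0)
vals = (0, 1)
raw = {abc: random.random() for abc in itertools.product(vals, vals, vals)}
z = sum(raw.values())
p = {abc: v / z for abc, v in raw.items()}  # normalized joint P(a, b, c)

def marg(fixed):
    """Marginal probability of the assignments in `fixed` (index -> value)."""
    return sum(v for abc, v in p.items()
               if all(abc[i] == x for i, x in fixed.items()))

for a, b, c in itertools.product(vals, vals, vals):
    # P(a, b, c) = P(a | b, c) * P(b | c) * P(c)
    rhs = (marg({0: a, 1: b, 2: c}) / marg({1: b, 2: c})) \
        * (marg({1: b, 2: c}) / marg({2: c})) \
        * marg({2: c})
    assert abs(p[(a, b, c)] - rhs) < 1e-12
```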
For example, applying the definition twice, we get

$$P(\mathrm{a}, \mathrm{b}, \mathrm{c}) = P(\mathrm{a} \mid \mathrm{b}, \mathrm{c})\, P(\mathrm{b}, \mathrm{c})$$
$$P(\mathrm{b}, \mathrm{c}) = P(\mathrm{b} \mid \mathrm{c})\, P(\mathrm{c})$$
$$P(\mathrm{a}, \mathrm{b}, \mathrm{c}) = P(\mathrm{a} \mid \mathrm{b}, \mathrm{c})\, P(\mathrm{b} \mid \mathrm{c})\, P(\mathrm{c}).$$

3.7 Independence and Conditional Independence

Two random variables $\mathrm{x}$ and $\mathrm{y}$ are independent if their probability distribution can be expressed as a product of two factors, one involving only $\mathrm{x}$ and one involving only $\mathrm{y}$:

$$\forall x \in \mathrm{x}, y \in \mathrm{y},\ p(\mathrm{x} = x, \mathrm{y} = y) = p(\mathrm{x} = x)\, p(\mathrm{y} = y). \tag{3.7}$$

Two random variables $\mathrm{x}$ and $\mathrm{y}$ are conditionally independent given a random variable $\mathrm{z}$ if the conditional probability distribution over $\mathrm{x}$ and $\mathrm{y}$ factorizes in this way for every value of $\mathrm{z}$:

$$\forall x \in \mathrm{x}, y \in \mathrm{y}, z \in \mathrm{z},\ p(\mathrm{x} = x, \mathrm{y} = y \mid \mathrm{z} = z) = p(\mathrm{x} = x \mid \mathrm{z} = z)\, p(\mathrm{y} = y \mid \mathrm{z} = z). \tag{3.8}$$
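A small sketch (with made-up conditional probability tables, chosen only for illustration) shows a joint of the form $p(x \mid z)\, p(y \mid z)\, p(z)$ satisfying equation 3.8 while $x$ and $y$ remain marginally dependent:

```python
# Sketch: conditional independence given z does not imply marginal
# independence.  All numbers here are illustrative.
px_z = {0: 0.9, 1: 0.2}  # P(x=1 | z)
py_z = {0: 0.8, 1: 0.1}  # P(y=1 | z)
pz = {0: 0.5, 1: 0.5}    # P(z)

def joint(x, y, z):
    fx = px_z[z] if x == 1 else 1 - px_z[z]
    fy = py_z[z] if y == 1 else 1 - py_z[z]
    return fx * fy * pz[z]

# Equation 3.8: P(x, y | z) = P(x | z) P(y | z) for every assignment.
for z in (0, 1):
    for x in (0, 1):
        for y in (0, 1):
            p_xy_given_z = joint(x, y, z) / pz[z]
            fx = px_z[z] if x == 1 else 1 - px_z[z]
            fy = py_z[z] if y == 1 else 1 - py_z[z]
            assert abs(p_xy_given_z - fx * fy) < 1e-12

# Marginally, however, P(x=1, y=1) != P(x=1) P(y=1).
p_x1 = sum(joint(1, y, z) for y in (0, 1) for z in (0, 1))
p_y1 = sum(joint(x, 1, z) for x in (0, 1) for z in (0, 1))
p_x1y1 = sum(joint(1, 1, z) for z in (0, 1))
```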
We can denote independence and conditional independence with compact notation: $\mathrm{x} \perp \mathrm{y}$ means that $\mathrm{x}$ and $\mathrm{y}$ are independent, while $\mathrm{x} \perp \mathrm{y} \mid \mathrm{z}$ means that $\mathrm{x}$ and $\mathrm{y}$ are conditionally independent given $\mathrm{z}$.

3.8 Expectation, Variance and Covariance

The expectation or expected value of some function $f(x)$ with respect to a probability distribution $P(\mathrm{x})$ is the average or mean value that $f$ takes on when $x$ is drawn from $P$. For discrete variables this can be computed with a summation:

$$\mathbb{E}_{\mathrm{x} \sim P}[f(x)] = \sum_x P(x) f(x), \tag{3.9}$$

while for continuous variables, it is computed with an integral:

$$\mathbb{E}_{\mathrm{x} \sim p}[f(x)] = \int p(x) f(x)\, dx. \tag{3.10}$$
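Equation 3.9 can be illustrated with a fair six-sided die (an arbitrary example distribution, not one used by the book) and compared against a Monte Carlo sample average:

```python
# Sketch: the discrete expectation of equation 3.9 for f(x) = x**2 under
# a fair die distribution, versus an empirical sample average.
import random

random.seed(0)
states = [1, 2, 3, 4, 5, 6]
pmf = {x: 1 / 6 for x in states}

def f(x):
    return x ** 2

exact = sum(pmf[x] * f(x) for x in states)          # equation 3.9
samples = [f(random.choice(states)) for _ in range(200_000)]
estimate = sum(samples) / len(samples)               # Monte Carlo estimate
```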
When the identity of the distribution is clear from the context, we may simply write the name of the random variable that the expectation is over, as in $\mathbb{E}_{\mathrm{x}}[f(x)]$. If it is clear which random variable the expectation is over, we may omit the subscript entirely, as in $\mathbb{E}[f(x)]$. By default, we can assume that $\mathbb{E}[\cdot]$ averages over the values of all the random variables inside the brackets. Likewise, when there is no ambiguity, we may omit the square brackets.

Expectations are linear, for example,

$$\mathbb{E}_{\mathrm{x}}[\alpha f(x) + \beta g(x)] = \alpha\, \mathbb{E}_{\mathrm{x}}[f(x)] + \beta\, \mathbb{E}_{\mathrm{x}}[g(x)], \tag{3.11}$$

when $\alpha$ and $\beta$ are not dependent on $x$.

The variance gives a measure of how much the values of a function of a random variable $\mathrm{x}$ vary as we sample different values of $x$ from its probability distribution:

$$\operatorname{Var}(f(x)) = \mathbb{E}\left[ \left( f(x) - \mathbb{E}[f(x)] \right)^2 \right]. \tag{3.12}$$

When the variance is low, the values of $f(x)$ cluster near their expected value.
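Both equation 3.11 and equation 3.12 can be checked on a small discrete distribution (the PMF and the choices of $f$, $g$, $\alpha$, $\beta$ below are illustrative):

```python
# Sketch: linearity of expectation (equation 3.11) and the variance
# definition (equation 3.12) on a tiny discrete distribution.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

def E(g):
    """Discrete expectation, equation 3.9."""
    return sum(p * g(x) for x, p in pmf.items())

alpha, beta = 3.0, -2.0
f = lambda x: x
g = lambda x: x * x

lhs = E(lambda x: alpha * f(x) + beta * g(x))  # E[alpha f + beta g]
rhs = alpha * E(f) + beta * E(g)               # alpha E[f] + beta E[g]

mean = E(f)
var = E(lambda x: (x - mean) ** 2)             # equation 3.12
```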
The square root of the variance is known as the standard deviation.

The covariance gives some sense of how much two values are linearly related to each other, as well as the scale of these variables:

$$\operatorname{Cov}(f(x), g(y)) = \mathbb{E}\left[ \left( f(x) - \mathbb{E}[f(x)] \right) \left( g(y) - \mathbb{E}[g(y)] \right) \right]. \tag{3.13}$$

High absolute values of the covariance mean that the values change very much and are both far from their respective means at the same time. If the sign of the covariance is positive, then both variables tend to take on relatively high values simultaneously. If the sign of the covariance is negative, then one variable tends to take on a relatively high value at the times that the other takes on a relatively low value and vice versa.
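A brief sketch (with illustrative sampled data; not from the book) shows positive and negative covariance, and how the covariance absorbs the scale of the variables:

```python
# Sketch: sample covariance, computed as in equation 3.13 with sample
# means standing in for expectations.
import random

random.seed(0)
n = 50_000
xs = [random.gauss(0, 1) for _ in range(n)]
noise = [random.gauss(0, 0.1) for _ in range(n)]
ys_pos = [2 * x + e for x, e in zip(xs, noise)]   # rises with x
ys_neg = [-2 * x + e for x, e in zip(xs, noise)]  # falls as x rises

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return mean([(a - mu) * (b - mv) for a, b in zip(u, v)])

c_pos = cov(xs, ys_pos)                       # close to +2
c_neg = cov(xs, ys_neg)                       # close to -2
c_scaled = cov(xs, [10 * y for y in ys_pos])  # grows with the scale of y
```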
Other measures such as correlation normalize the contribution of each variable in order to measure only how much the variables are related, rather than also being influenced by the scale of the separate variables.

The notions of covariance and dependence are related, but are in fact distinct concepts. They are related because two variables that are independent have zero covariance, and two variables that have non-zero covariance are dependent. However, independence is a distinct property from covariance. For two variables to have zero covariance, there must be no linear dependence between them. Independence is a stronger requirement than zero covariance, because independence also excludes nonlinear relationships. It is possible for two variables to be dependent but have zero covariance. For example, suppose we first sample a real number $x$ from a uniform distribution over the interval $[-1, 1]$. We next sample a random variable $s$.
With probability $\frac{1}{2}$, we choose the value of $s$ to be 1. Otherwise, we choose the value of $s$ to be $-1$. We can then generate a random variable $y$ by assigning $y = sx$. Clearly, $x$ and $y$ are not independent, because $x$ completely determines the magnitude of $y$. However, $\operatorname{Cov}(x, y) = 0$.

The covariance matrix of a random vector $\mathbf{x} \in \mathbb{R}^n$ is an $n \times n$ matrix, such that

$$\operatorname{Cov}(\mathbf{x})_{i,j} = \operatorname{Cov}(\mathrm{x}_i, \mathrm{x}_j). \tag{3.14}$$

The diagonal elements of the covariance give the variance:

$$\operatorname{Cov}(\mathrm{x}_i, \mathrm{x}_i) = \operatorname{Var}(\mathrm{x}_i). \tag{3.15}$$

3.9 Common Probability Distributions

Several simple probability distributions are useful in many contexts in machine learning.

3.9.1 Bernoulli Distribution

The Bernoulli distribution is a distribution over a single binary random variable. It is controlled by a single parameter $\phi \in [0, 1]$, which gives the probability of the random variable being equal to 1.
It has the following properties:

$$P(\mathrm{x} = 1) = \phi \tag{3.16}$$
$$P(\mathrm{x} = 0) = 1 - \phi \tag{3.17}$$
$$P(\mathrm{x} = x) = \phi^{x} (1 - \phi)^{1 - x} \tag{3.18}$$
$$\mathbb{E}_{\mathrm{x}}[\mathrm{x}] = \phi \tag{3.19}$$
$$\operatorname{Var}_{\mathrm{x}}(\mathrm{x}) = \phi (1 - \phi) \tag{3.20}$$

3.9.2 Multinoulli Distribution

The multinoulli or categorical distribution is a distribution over a single discrete variable with $k$ different states, where $k$ is finite.¹

¹ "Multinoulli" is a term that was recently coined by Gustavo Lacerda and popularized by Murphy (2012). The multinoulli distribution is a special case of the multinomial distribution. A multinomial distribution is the distribution over vectors in $\{0, \ldots, n\}^k$ representing how many times each of the $k$ categories is visited when $n$ samples are drawn from a multinoulli distribution.
to multinoulli distributions without clarifying that they refer only to the n = 1 case.
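The Bernoulli properties in equations 3.16 through 3.20 are easy to verify numerically. Below is a minimal sketch with numpy; the function name and the parameter value φ = 0.3 are arbitrary choices for illustration, not anything from the text:

```python
import numpy as np

# A quick numerical check of the Bernoulli properties in equations
# (3.16)-(3.20). The parameter value phi = 0.3 is an arbitrary choice.
phi = 0.3

def bernoulli_pmf(x, phi):
    # P(x = x) = phi^x * (1 - phi)^(1 - x), equation (3.18)
    return phi ** x * (1.0 - phi) ** (1 - x)

assert bernoulli_pmf(1, phi) == phi          # equation (3.16)
assert bernoulli_pmf(0, phi) == 1.0 - phi    # equation (3.17)

# The sample mean and variance approach phi and phi * (1 - phi),
# matching equations (3.19) and (3.20).
rng = np.random.default_rng(0)
samples = rng.binomial(n=1, p=phi, size=1_000_000)
print(samples.mean())  # approximately 0.3
print(samples.var())   # approximately 0.21
```

With a million draws, the empirical moments agree with equations 3.19 and 3.20 to about three decimal places.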
parametrized by a vector p ∈ [0, 1]^{k−1}, where p_i gives the probability of the i-th state. The final, k-th state's probability is given by 1 − 1⊤p. Note that we must constrain 1⊤p ≤ 1.

Multinoulli distributions are often used to refer to distributions over categories of objects, so we do not usually assume that state 1 has numerical value 1, etc. For this reason, we do not usually need to compute the expectation or variance of multinoulli-distributed random variables.

The Bernoulli and multinoulli distributions are sufficient to describe any distribution over their domain. They are able to describe any distribution over their domain not so much because they are particularly powerful but rather because their domain is simple; they model discrete variables for which it is feasible to enumerate all of the states. When dealing with continuous variables, there are uncountably many states, so any distribution described by a small number of parameters must impose strict limits on the distribution.

3.9.3 Gaussian Distribution

The most commonly used distribution over real numbers is the normal distribution, also known as
the Gaussian distribution:

    N(x; µ, σ²) = √(1 / (2πσ²)) exp(−(1 / (2σ²)) (x − µ)²).    (3.21)

See figure 3.1 for a plot of the density function.

The two parameters µ ∈ R and σ ∈ (0, ∞) control the normal distribution. The parameter µ gives the coordinate of the central peak. This is also the mean of the distribution: E[x] = µ. The standard deviation of the distribution is given by σ, and the variance by σ².

When we evaluate the PDF, we need to square and invert σ. When we need to frequently evaluate the PDF with different parameter values, a more efficient way of parametrizing the distribution is to use a parameter β ∈ (0, ∞) to control the precision, or inverse variance, of the distribution:

    N(x; µ, β⁻¹) = √(β / (2π)) exp(−(1/2) β (x − µ)²).    (3.22)
Normal distributions are a sensible choice for many applications. In the absence of prior knowledge about what form a distribution over the real numbers should take, the normal distribution is a good default choice for two major reasons.
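The variance parametrization (equation 3.21) and the precision parametrization (equation 3.22) describe the same density when β = 1/σ². A minimal numpy sketch confirming this; the function names and test values are mine:

```python
import numpy as np

def normal_pdf(x, mu, sigma2):
    # Equation (3.21): parametrized by the variance sigma^2
    return np.sqrt(1.0 / (2.0 * np.pi * sigma2)) * \
        np.exp(-((x - mu) ** 2) / (2.0 * sigma2))

def normal_pdf_precision(x, mu, beta):
    # Equation (3.22): parametrized by the precision beta = 1 / sigma^2,
    # so evaluating the density needs no squaring or inversion of sigma
    return np.sqrt(beta / (2.0 * np.pi)) * np.exp(-0.5 * beta * (x - mu) ** 2)

x, mu, sigma2 = 0.7, 0.0, 2.0
assert np.isclose(normal_pdf(x, mu, sigma2),
                  normal_pdf_precision(x, mu, 1.0 / sigma2))
```

The precision form is preferred when the density must be evaluated for many different parameter values, since β is used directly rather than inverted at each call.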
Figure 3.1: The normal distribution. The normal distribution N(x; µ, σ²) exhibits a classic "bell curve" shape, with the x coordinate of its central peak given by µ, and the width of its peak controlled by σ. The density p(x) has its maximum at x = µ and inflection points at x = µ ± σ. In this example, we depict the standard normal distribution, with µ = 0 and σ = 1.

First, many distributions we wish to model are truly close to being normal distributions. The central limit theorem shows that the sum of many independent random variables is approximately normally distributed. This means that in practice, many complicated systems can be modeled successfully as normally distributed noise, even if the system can be decomposed into parts with more structured behavior.

Second, out of all possible probability distributions with the same variance, the normal distribution encodes the maximum amount of uncertainty over the real numbers.
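The central limit theorem behavior described above is easy to observe empirically. The sketch below (the term and sample counts are arbitrary choices) standardizes sums of independent uniform variables and checks that roughly 68% of the mass lies within one standard deviation, as it would for a standard normal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each row sums 50 independent uniform(-1, 1) variables; by the central
# limit theorem the row sums are approximately normally distributed.
n_terms, n_samples = 50, 200_000
sums = rng.uniform(-1.0, 1.0, size=(n_samples, n_terms)).sum(axis=1)

# Standardize the sums and compare to the standard normal rule of thumb
# that about 68% of the probability mass lies within one standard deviation.
z = (sums - sums.mean()) / sums.std()
frac_within_one_sd = np.mean(np.abs(z) <= 1.0)
print(frac_within_one_sd)  # approximately 0.68
```

Even though each individual term is uniform, not Gaussian, the distribution of the sums is already close to a bell curve at 50 terms.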
We can thus think of the normal distribution as being the one that inserts the least amount of prior knowledge into a model. Fully developing and justifying this idea requires more mathematical tools, and is postponed to section 19.4.2.

The normal distribution generalizes to R^n, in which case it is known as the multivariate normal distribution. It may be parametrized with a positive definite symmetric matrix Σ:

    N(x; µ, Σ) = √(1 / ((2π)^n det(Σ))) exp(−(1/2) (x − µ)⊤ Σ⁻¹ (x − µ)).    (3.23)
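Equation 3.23 can be implemented directly. Below is a minimal sketch; the function name and the example mean and covariance are mine, and Σ⁻¹(x − µ) is computed with a linear solve rather than an explicit matrix inverse, a common numerical choice:

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    # Equation (3.23): multivariate normal density with mean vector mu
    # and positive definite symmetric covariance matrix Sigma.
    n = x.shape[0]
    diff = x - mu
    norm_const = 1.0 / np.sqrt((2.0 * np.pi) ** n * np.linalg.det(Sigma))
    # Sigma^{-1} (x - mu) via a linear solve instead of forming the inverse
    return norm_const * np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff))

mu = np.zeros(2)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])  # positive definite, symmetric
x = np.array([0.3, -0.2])
print(mvn_pdf(x, mu, Sigma))
```

As a sanity check, at x = µ with Σ equal to the 2 × 2 identity, the density reduces to 1/(2π).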
The parameter µ still gives the mean of the distribution, though now it is vector-valued. The parameter Σ gives the covariance matrix of the distribution.

As in the univariate case, when we wish to evaluate the PDF several times for many different values of the parameters, the covariance is not a computationally efficient way to parametrize the distribution, since we need to invert Σ to evaluate the PDF. We can instead use a precision matrix β:

    N(x; µ, β⁻¹) = √(det(β) / (2π)^n) exp(−(1/2) (x − µ)⊤ β (x − µ)).    (3.24)

We often fix the covariance matrix to be a diagonal matrix. An even simpler version is the isotropic Gaussian distribution, whose covariance matrix is a scalar times the identity matrix.

3.9.4 Exponential and Laplace Distributions

In the context of deep learning, we often want to have a probability distribution with a sharp point at x = 0. To accomplish this, we can use the exponential distribution:

    p(x; λ) = λ 1_{x≥0} exp(−λx).    (3.25)
The exponential distribution uses the indicator function 1_{x≥0} to assign probability zero to all negative values of x.

A closely related probability distribution that allows us to place a sharp peak of probability mass at an arbitrary point µ is the Laplace distribution:

    Laplace(x; µ, γ) = (1 / (2γ)) exp(−|x − µ| / γ).    (3.26)

3.9.5 The Dirac Distribution and Empirical Distribution

In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function, δ(x):

    p(x) = δ(x − µ).    (3.27)

The Dirac delta function is defined such that it is zero-valued everywhere except 0, yet integrates to 1. The Dirac delta function is not an ordinary function that associates each value x with a real-valued output; instead it is a different kind
of
mathematical object called a generalized function that is defined in terms of its properties when integrated. We can think of the Dirac delta function as being the limit point of a series of functions that put less and less mass on all points other than zero.

By defining p(x) to be δ shifted by −µ we obtain an infinitely narrow and infinitely high peak of probability mass where x = µ.

A common use of the Dirac delta distribution is as a component of an empirical distribution,

    p̂(x) = (1/m) Σ_{i=1}^{m} δ(x − x^(i))    (3.28)

which puts probability mass 1/m on each of the m points x^(1), ..., x^(m) forming a given dataset or collection of samples. The Dirac delta distribution is only necessary to define the empirical distribution over continuous variables. For discrete variables, the situation is simpler: an empirical distribution can be conceptualized as a multinoulli distribution, with a probability associated to each possible input value that is simply equal to the empirical frequency of that value in the training set.

We can view the empirical distribution formed from a dataset of