Columns: text (string, lengths 35–1.54k) | source (string, 1 class) | page (int64, range 1–800) | book (string, 1 class) | chunk_index (int64, 0)
|---|---|---|---|---|
Think of each of these regions as a category or symbol: by having a separate degree of freedom for each symbol (or region), we can learn an arbitrary decoder mapping from symbol to value. However, this does not allow us to generalize to new symbols for new regions. If we are lucky, there may be some regularity in the target function, besides being smooth. For example, a convolutional network with max-pooling can recognize an object regardless of its location in the image, even though spatial translation of the object may not correspond to smooth transformations in the input space. Let us examine a special case of a distributed representation learning algorithm, one that extracts binary features by thresholding linear functions of the input. Each binary feature in this representation divides R^d into a pair of half-spaces, as illustrated in figure 15.7. The exponentially large number of intersections of the n corresponding half-spaces determines how many regions this distributed representation learner can distinguish. How many regions are generated by an arrangement of n hyperplanes in R^d?
Source: /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | page: 565 | book: Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | chunk_index: 0
By applying a general result concerning the intersection of hyperplanes (Zaslavsky, 1975; Pascanu et al., 2014b), one can show that the number of regions this binary feature representation can distinguish is

\sum_{j=0}^{d} \binom{n}{j} = O(n^d).   (15.4)

Therefore, we see a growth that is exponential in the input size and polynomial in the number of hidden units.¹

¹Potentially, we may want to learn a function whose behavior is distinct in exponentially many regions: in a d-dimensional space with at least 2 different values to distinguish per dimension, we might want f to distinguish 2^d different regions, requiring O(2^d) training examples.
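The linear-threshold feature extraction and the region count of equation 15.4 can be sketched in a few lines of Python. This is a minimal illustration, not code from the book: the weights `W`, offsets `b`, and inputs `X` below are arbitrary values chosen for the example.

```python
import numpy as np
from math import comb

def binary_features(X, W, b):
    """One bit per hyperplane: each bit records which half-space of
    w . x + b = 0 the input falls in (thresholded linear features)."""
    return (X @ W.T + b > 0).astype(int)

def regions(n, d):
    """Regions cut out by n hyperplanes in general position in R^d,
    per equation 15.4: sum_{j=0}^{d} C(n, j), which is O(n^d) for fixed d."""
    return sum(comb(n, j) for j in range(d + 1))

# Toy example with arbitrary weights: n = 3 hyperplanes in d = 2 dimensions.
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, -1.0])
X = np.array([[2.0, 2.0], [-1.0, 0.5]])
codes = binary_features(X, W, b)   # one binary code per input row

n_regions = regions(3, 2)          # 3 generic lines cut the plane into 7 regions
```

Note that `regions(3, 2)` returns 7 = C(3,0) + C(3,1) + C(3,2), matching the familiar fact that three lines in general position divide the plane into seven regions.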
Chapter 15: Representation Learning

This provides a geometric argument to explain the generalization power of distributed representations: with O(nd) parameters (for n linear-threshold features in R^d) we can distinctly represent O(n^d) regions in input space. If instead we made no assumption at all about the data, and used a representation with one unique symbol for each region, and separate parameters for each symbol to recognize its corresponding portion of R^d, then specifying O(n^d) regions would require O(n^d) examples. More generally, the argument in favor of the distributed representation can be extended to the case where, instead of using linear threshold units, we use nonlinear, possibly continuous, feature extractors for each of the attributes in the distributed representation. The argument in this case is that if a parametric transformation with k parameters can learn about r regions in input space, with k ≪ r, and if obtaining such a representation was useful to the task of interest, then we could potentially generalize much better in this way than in a non-distributed setting, where we would need O(r) examples to obtain the same features and associated partitioning of the input space into r regions.
Using fewer parameters to represent the model means that we have fewer parameters to fit, and thus require far fewer training examples to generalize well. A further part of the argument for why models based on distributed representations generalize well is that their capacity remains limited despite being able to distinctly encode so many different regions. For example, the VC dimension of a neural network of linear threshold units is only O(W log W), where W is the number of weights (Sontag, 1998). This limitation arises because, while we can assign very many unique codes to representation space, we cannot use absolutely all of the code space, nor can we learn arbitrary functions mapping from the representation space h to the output y using a linear classifier. The use of a distributed representation combined with a linear classifier thus expresses a prior belief that the classes to be recognized are linearly separable as a function of the underlying causal factors captured by h.
We will typically want to learn categories such as the set of all images of all green objects or the set of all images of cars, but not categories that require nonlinear, XOR logic. For example, we typically do not want to partition the data into the set of all red cars and green trucks as one class and the set of all green cars and red trucks as another class. The ideas discussed so far have been abstract, but they may be experimentally validated. Zhou et al. (2015) find that hidden units in a deep convolutional network trained on the ImageNet and Places benchmark datasets learn features that are very often interpretable, corresponding to a label that humans would naturally assign. In practice it is certainly not always the case that hidden units learn something that has a simple linguistic name, but it is interesting to see this emerge near the top levels of the best computer vision deep networks.
Figure 15.9: A generative model has learned a distributed representation that disentangles the concept of gender from the concept of wearing glasses. If we begin with the representation of the concept of a man with glasses, then subtract the vector representing the concept of a man without glasses, and finally add the vector representing the concept of a woman without glasses, we obtain the vector representing the concept of a woman with glasses. The generative model correctly decodes all of these representation vectors to images that may be recognized as belonging to the correct class. Images reproduced with permission from Radford et al. (2015).

What such features have in common is that one could imagine learning about each of them without having to see all the configurations of all the others. Radford et al. (2015) demonstrated that a generative model can learn a representation of images of faces, with separate directions in representation space capturing different underlying factors of variation. Figure 15.9 demonstrates that one direction in representation space corresponds to whether the person is male or female, while another corresponds to whether the person is wearing glasses. These features were discovered automatically, not fixed a priori.
There is no need to have labels for the hidden unit classifiers: gradient descent on an objective function of interest naturally learns semantically interesting features, so long as the task requires such features. We can learn about the distinction between male and female, or about the presence or absence of glasses, without having to characterize all of the configurations of the n − 1 other features by examples covering all of these combinations of values. This form of statistical separability is what allows one to generalize to new configurations of a person's features that have never been seen during training.
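The vector arithmetic of figure 15.9 can be illustrated with a toy disentangled representation. The coordinates and their meanings below are invented for illustration only; they are not taken from any trained model.

```python
import numpy as np

# Hypothetical 2-D representation space: axis 0 encodes gender
# (+1 man, -1 woman), axis 1 encodes glasses (+1 with, -1 without).
man_with_glasses      = np.array([ 1.0,  1.0])
man_without_glasses   = np.array([ 1.0, -1.0])
woman_without_glasses = np.array([-1.0, -1.0])

# man with glasses - man without glasses + woman without glasses:
# subtracting removes the "man" content, adding restores "woman".
result = man_with_glasses - man_without_glasses + woman_without_glasses

# In a perfectly disentangled space, the result lands exactly on
# the representation of a woman with glasses.
woman_with_glasses = np.array([-1.0, 1.0])
```

Because each factor occupies its own axis, the subtraction cancels the shared "without glasses" and "man" components, leaving only the glasses direction applied to the woman vector.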
15.5 Exponential Gains from Depth

We have seen in section 6.4.1 that multilayer perceptrons are universal approximators, and that some functions can be represented by exponentially smaller deep networks compared to shallow networks. This decrease in model size leads to improved statistical efficiency. In this section, we describe how similar results apply more generally to other kinds of models with distributed hidden representations. In section 15.4, we saw an example of a generative model that learned about the explanatory factors underlying images of faces, including the person's gender and whether they are wearing glasses. The generative model that accomplished this task was based on a deep neural network. It would not be reasonable to expect a shallow network, such as a linear network, to learn the complicated relationship between these abstract explanatory factors and the pixels in the image. In this and other AI tasks, the factors that can be chosen almost independently from each other yet still correspond to meaningful inputs are more likely to be very high-level and related in highly nonlinear ways to the input.
We argue that this demands deep distributed representations, where the higher-level features (seen as functions of the input) or factors (seen as generative causes) are obtained through the composition of many nonlinearities. It has been proven in many different settings that organizing computation through the composition of many nonlinearities and a hierarchy of reused features can give an exponential boost to statistical efficiency, on top of the exponential boost given by using a distributed representation. Many kinds of networks (e.g., with saturating nonlinearities, Boolean gates, sum/products, or RBF units) with a single hidden layer can be shown to be universal approximators. A model family that is a universal approximator can approximate a large class of functions (including all continuous functions) up to any non-zero tolerance level, given enough hidden units. However, the required number of hidden units may be very large.
Theoretical results concerning the expressive power of deep architectures state that there are families of functions that can be represented efficiently by an architecture of depth k, but would require an exponential number of hidden units (with respect to the input size) with insufficient depth (depth 2 or depth k − 1). In section 6.4.1, we saw that deterministic feedforward networks are universal approximators of functions. Many structured probabilistic models with a single hidden layer of latent variables, including restricted Boltzmann machines and deep belief networks, are universal approximators of probability distributions (Le Roux and Bengio, 2008, 2010; Montúfar and Ay, 2011; Montúfar et al., 2014; Krause et al., 2013).
In section 6.4.1, we saw that a sufficiently deep feedforward network can have an exponential advantage over a network that is too shallow. Such results can also be obtained for other models, such as probabilistic models. One such probabilistic model is the sum-product network, or SPN (Poon and Domingos, 2011). These models use polynomial circuits to compute the probability distribution over a set of random variables. Delalleau and Bengio (2011) showed that there exist probability distributions for which a minimum depth of SPN is required to avoid needing an exponentially large model. Later, Martens and Medabalimi (2014) showed that there are significant differences between every two finite depths of SPN, and that some of the constraints used to make SPNs tractable may limit their representational power. Another interesting development is a set of theoretical results for the expressive power of families of deep circuits related to convolutional nets, highlighting an exponential advantage for the deep circuit even when the shallow circuit is allowed to only approximate the function computed by the deep circuit (Cohen et al., 2015).
By comparison, previous theoretical work made claims regarding only the case where the shallow circuit must exactly replicate particular functions.

15.6 Providing Clues to Discover Underlying Causes

To close this chapter, we come back to one of our original questions: what makes one representation better than another? One answer, first introduced in section 15.3, is that an ideal representation is one that disentangles the underlying causal factors of variation that generated the data, especially those factors that are relevant to our applications. Most strategies for representation learning are based on introducing clues that help the learning algorithm find these underlying factors of variation. The clues can help the learner separate these observed factors from the others. Supervised learning provides a very strong clue: a label y, presented with each x, that usually specifies the value of at least one of the factors of variation directly. More generally, to make use of abundant unlabeled data, representation learning makes use of other, less direct, hints about the underlying factors.
These hints take the form of implicit prior beliefs that we, the designers of the learning algorithm, impose in order to guide the learner. Results such as the no free lunch theorem show that regularization strategies are necessary to obtain good generalization. While it is impossible to find a universally superior regularization strategy, one goal of deep learning is to find a set of fairly generic regularization strategies that are applicable to a wide variety of AI tasks, similar to the tasks that people and animals are able to solve.
We provide here a list of these generic regularization strategies. The list is clearly not exhaustive, but gives some concrete examples of ways that learning algorithms can be encouraged to discover features that correspond to underlying factors. This list was introduced in section 3.1 of Bengio et al. (2013d) and has been partially expanded here.

• Smoothness: This is the assumption that f(x + εd) ≈ f(x) for unit d and small ε. This assumption allows the learner to generalize from training examples to nearby points in input space. Many machine learning algorithms leverage this idea, but it is insufficient to overcome the curse of dimensionality.

• Linearity: Many learning algorithms assume that relationships between some variables are linear. This allows the algorithm to make predictions even very far from the observed data, but can sometimes lead to overly extreme predictions. Most simple machine learning algorithms that do not make the smoothness assumption instead make the linearity assumption. These are in fact different assumptions: linear functions with large weights applied to high-dimensional spaces may not be very smooth. See Goodfellow et al. (2014b) for a further discussion of the limitations of the linearity assumption.
• Multiple explanatory factors: Many representation learning algorithms are motivated by the assumption that the data is generated by multiple underlying explanatory factors, and that most tasks can be solved easily given the state of each of these factors. Section 15.3 describes how this view motivates semi-supervised learning via representation learning. Learning the structure of p(x) requires learning some of the same features that are useful for modeling p(y | x), because both refer to the same underlying explanatory factors. Section 15.4 describes how this view motivates the use of distributed representations, with separate directions in representation space corresponding to separate factors of variation.

• Causal factors: The model is constructed in such a way that it treats the factors of variation described by the learned representation h as the causes of the observed data x, and not vice versa.
As discussed in section 15.3, this is advantageous for semi-supervised learning and makes the learned model more robust when the distribution over the underlying causes changes or when we use the model for a new task.

• Depth, or a hierarchical organization of explanatory factors: High-level, abstract concepts can be defined in terms of simple concepts, forming a hierarchy.
From another point of view, the use of a deep architecture expresses our belief that the task should be accomplished via a multi-step program, with each step referring back to the output of the processing accomplished via previous steps.

• Shared factors across tasks: In the context where we have many tasks, corresponding to different y_i variables sharing the same input x, or where each task is associated with a subset or a function f^(i)(x) of a global input x, the assumption is that each y_i is associated with a different subset from a common pool of relevant factors h. Because these subsets overlap, learning all the p(y_i | x) via a shared intermediate representation p(h | x) allows sharing of statistical strength between the tasks.

• Manifolds: Probability mass concentrates, and the regions in which it concentrates are locally connected and occupy a tiny volume. In the continuous case, these regions can be approximated by low-dimensional manifolds with a much smaller dimensionality than the original space where the data lives. Many machine learning algorithms behave sensibly only on this manifold (Goodfellow et al., 2014b). Some machine learning algorithms, especially autoencoders, attempt to explicitly learn the structure of the manifold.
• Natural clustering: Many machine learning algorithms assume that each connected manifold in the input space may be assigned to a single class. The data may lie on many disconnected manifolds, but the class remains constant within each one of them. This assumption motivates a variety of learning algorithms, including tangent propagation, double backprop, the manifold tangent classifier, and adversarial training.

• Temporal and spatial coherence: Slow feature analysis and related algorithms make the assumption that the most important explanatory factors change slowly over time, or at least that it is easier to predict the true underlying explanatory factors than to predict raw observations such as pixel values. See section 13.3 for further description of this approach.

• Sparsity: Most features should presumably not be relevant to describing most inputs; there is no need to use a feature that detects elephant trunks when representing an image of a cat.
It is therefore reasonable to impose a prior that any feature that can be interpreted as "present" or "absent" should be absent most of the time.

• Simplicity of factor dependencies: In good high-level representations, the factors are related to each other through simple dependencies.
The simplest possible is marginal independence, p(h) = ∏_i p(h_i), but linear dependencies or those captured by a shallow autoencoder are also reasonable assumptions. This can be seen in many laws of physics, and is assumed when plugging a linear predictor or a factorized prior on top of a learned representation.

The concept of representation learning ties together all of the many forms of deep learning. Feedforward and recurrent networks, autoencoders, and deep probabilistic models all learn and exploit representations. Learning the best possible representation remains an exciting avenue of research.
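The marginal-independence prior p(h) = ∏_i p(h_i) from the simplicity-of-factor-dependencies point above can be sketched for a handful of binary factors. The per-factor probabilities below are arbitrary illustrative values, not taken from any model.

```python
import itertools
import numpy as np

# p(h_i = 1) for three hypothetical binary factors.
p_on = np.array([0.2, 0.7, 0.5])

def p_joint(h):
    """Factorized joint p(h) = prod_i p(h_i) for a binary configuration h."""
    h = np.asarray(h)
    return float(np.prod(np.where(h == 1, p_on, 1.0 - p_on)))

# A factorized prior over 3 binary factors needs only 3 numbers instead of
# the 2**3 - 1 = 7 free parameters of an unrestricted distribution, and its
# configuration probabilities still sum to one.
total = sum(p_joint(h) for h in itertools.product([0, 1], repeat=3))
```

The point of the sketch is the parameter count: marginal independence makes the joint over n binary factors cost n parameters rather than 2^n − 1.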
Chapter 16: Structured Probabilistic Models for Deep Learning

Deep learning draws upon many modeling formalisms that researchers can use to guide their design efforts and describe their algorithms. One of these formalisms is the idea of structured probabilistic models. We have already discussed structured probabilistic models briefly in section 3.14. That brief presentation was sufficient to understand how to use structured probabilistic models as a language to describe some of the algorithms in part II. Now, in part III, structured probabilistic models are a key ingredient of many of the most important research topics in deep learning. In order to prepare to discuss these research ideas, this chapter describes structured probabilistic models in much greater detail. This chapter is intended to be self-contained; the reader does not need to review the earlier introduction before continuing with this chapter. A structured probabilistic model is a way of describing a probability distribution, using a graph to describe which random variables in the probability distribution interact with each other directly. Here we use "graph" in the graph theory sense: a set of vertices connected to one another by a set of edges.
Because the structure of the model is defined by a graph, these models are often also referred to as graphical models. The graphical models research community is large and has developed many different models, training algorithms, and inference algorithms. In this chapter, we provide basic background on some of the most central ideas of graphical models, with an emphasis on the concepts that have proven most useful to the deep learning research community. If you already have a strong background in graphical models, you may wish to skip most of this chapter.
However, even a graphical model expert may benefit from reading the final section of this chapter, section 16.7, in which we highlight some of the unique ways that graphical models are used for deep learning algorithms. Deep learning practitioners tend to use very different model structures, learning algorithms, and inference procedures than are commonly used by the rest of the graphical models research community. In this chapter, we identify these differences in preferences and explain the reasons for them. In this chapter we first describe the challenges of building large-scale probabilistic models. Next, we describe how to use a graph to describe the structure of a probability distribution. While this approach allows us to overcome many challenges, it is not without its own complications. One of the major difficulties in graphical modeling is understanding which variables need to be able to interact directly, i.e., which graph structures are most suitable for a given problem. We outline two approaches to resolving this difficulty by learning about the dependencies in section 16.5. Finally, we close with a discussion of the unique emphasis that deep learning practitioners place on specific approaches to graphical modeling in section 16.7.

16.1 The Challenge of Unstructured Modeling
The goal of deep learning is to scale machine learning to the kinds of challenges needed to solve artificial intelligence. This means being able to understand high-dimensional data with rich structure. For example, we would like AI algorithms to be able to understand natural images,¹ audio waveforms representing speech, and documents containing multiple words and punctuation characters. Classification algorithms can take an input from such a rich high-dimensional distribution and summarize it with a categorical label: what object is in a photo, what word is spoken in a recording, what topic a document is about. The process of classification discards most of the information in the input and produces a single output (or a probability distribution over values of that single output). The classifier is also often able to ignore many parts of the input. For example, when recognizing an object in a photo, it is usually possible to ignore the background of the photo.
It is possible to ask probabilistic models to do many other tasks. These tasks are often more expensive than classification. Some of them require producing multiple output values. Most require a complete understanding of the entire structure of

¹A natural image is an image that might be captured by a camera in a reasonably ordinary environment, as opposed to a synthetically rendered image, a screenshot of a web page, etc.
Chapter 16. Structured Probabilistic Models for Deep Learning

the input, with no option to ignore sections of it. These tasks include the following:

• Density estimation: given an input x, the machine learning system returns an estimate of the true density p(x) under the data-generating distribution. This requires only a single output, but it does require a complete understanding of the entire input. If even one element of the vector is unusual, the system must assign it a low probability.

• Denoising: given a damaged or incorrectly observed input x̃, the machine learning system returns an estimate of the original or correct x. For example, the machine learning system might be asked to remove dust or scratches from an old photograph. This requires multiple outputs (every element of the estimated clean example x) and an understanding of the entire input (since even one damaged area will still reveal the final estimate as being damaged).

• Missing value imputation: given the observations of some elements of x, the model is asked to return estimates of, or a probability distribution over, some or all of the unobserved elements of x. This requires multiple outputs. Because the model could be asked to restore any of the elements of x, it must understand the entire input.
• Sampling: the model generates new samples from the distribution p(x). Applications include speech synthesis, i.e., producing new waveforms that sound like natural human speech. This requires multiple output values and a good model of the entire input. If the samples have even one element drawn from the wrong distribution, then the sampling process is wrong. For an example of a sampling task using small natural images, see figure 16.1.

Modeling a rich distribution over thousands or millions of random variables is a challenging task, both computationally and statistically. Suppose we only wanted to model binary variables. This is the simplest possible case, and yet already it seems overwhelming. For a small, 32 × 32 pixel color (RGB) image, there are 2^3072 possible binary images of this form. This number is over 10^800 times larger than the estimated number of atoms in the universe.
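These counts can be checked directly (a minimal sketch; the 32 × 32 RGB size and the rough 10^80 atom estimate are taken from the text above):

```python
import math

# A binary-valued 32x32 RGB image has 32*32*3 = 3072 binary variables,
# so a lookup table over all such images would have 2**3072 entries.
n_vars = 32 * 32 * 3
n_configs = 2 ** n_vars

# log10 of the ratio between the table size and ~10**80 atoms
# in the observable universe.
excess = n_vars * math.log10(2) - 80

print(n_vars)        # 3072
print(excess > 800)  # True: the table is over 10**800 times larger
```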
In general, if we wish to model a distribution over a random vector x containing n discrete variables capable of taking on k values each, then the naive approach of representing p(x) by storing a lookup table with one probability value per possible outcome requires k^n parameters! This is not feasible for several reasons:
Figure 16.1: Probabilistic modeling of natural images. (Top) Example 32 × 32 pixel color images from the CIFAR-10 dataset (Krizhevsky and Hinton, 2009). (Bottom) Samples drawn from a structured probabilistic model trained on this dataset. Each sample appears at the same position in the grid as the training example that is closest to it in Euclidean space. This comparison allows us to see that the model is truly synthesizing new images, rather than memorizing the training data. Contrast of both sets of images has been adjusted for display. Figure reproduced with permission from Courville et al. (2011).
• Memory, the cost of storing the representation: for all but very small values of n and k, representing the distribution as a table will require too many values to store.

• Statistical efficiency: as the number of parameters in a model increases, so does the amount of training data needed to choose the values of those parameters using a statistical estimator. Because the table-based model has an astronomical number of parameters, it will require an astronomically large training set to fit accurately. Any such model will overfit the training set very badly unless additional assumptions are made linking the different entries in the table (for example, as in back-off smoothed n-gram models, section 12.4.1).

• Runtime, the cost of inference: suppose we want to perform an inference task where we use our model of the joint distribution p(x) to compute some other distribution, such as the marginal distribution p(x1) or the conditional distribution p(x2 | x1). Computing these distributions will require summing across the entire table, so the runtime of these operations is as high as the intractable memory cost of storing the model.
• Runtime, the cost of sampling: likewise, suppose we want to draw a sample from the model. The naive way to do this is to sample some value u ∼ U(0, 1), then iterate through the table, adding up the probability values until they exceed u, and return the outcome corresponding to that position in the table. This requires reading through the whole table in the worst case, so it has the same exponential cost as the other operations.

The problem with the table-based approach is that we are explicitly modeling every possible kind of interaction between every possible subset of variables. The probability distributions we encounter in real tasks are much simpler than this. Usually, most variables influence each other only indirectly. For example, consider modeling the finishing times of a team in a relay race.
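The naive table-sampling procedure from the list above can be sketched as follows (a minimal illustration with a hypothetical three-outcome table; the point is that the scan may visit every entry):

```python
def naive_table_sample(probs, u):
    """Scan the table, accumulating probability mass until it exceeds u.
    Worst case reads every entry, i.e. exponential cost in the number of
    variables when the table enumerates all joint configurations."""
    cumulative = 0.0
    for outcome, p in enumerate(probs):
        cumulative += p
        if u < cumulative:
            return outcome
    return len(probs) - 1  # guard against floating-point round-off

table = [0.2, 0.5, 0.3]                 # hypothetical p(x) over three outcomes
print(naive_table_sample(table, 0.10))  # 0: u falls in the first 0.2 of mass
print(naive_table_sample(table, 0.65))  # 1: u falls in (0.2, 0.7]
print(naive_table_sample(table, 0.95))  # 2: u falls in (0.7, 1.0]
```

In practice u would be drawn from a uniform random number generator; passing it explicitly keeps the sketch deterministic.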
Suppose the team consists of three runners: Alice, Bob and Carol. At the start of the race, Alice carries a baton and begins running around a track. After completing her lap around the track, she hands the baton to Bob. Bob then runs his own lap and hands the baton to Carol, who runs the final lap. We can model each of their finishing times as a continuous random variable. Alice's finishing time does not depend on anyone else's, since she goes first. Bob's finishing time depends on Alice's, because Bob does not have the opportunity to start his lap until Alice has completed hers. If Alice finishes faster, Bob will finish faster, all else being
equal. Finally, Carol's finishing time depends on both her teammates. If Alice is slow, Bob will probably finish late too. As a consequence, Carol will have quite a late starting time and thus is likely to have a late finishing time as well. However, Carol's finishing time depends only indirectly on Alice's finishing time via Bob's. If we already know Bob's finishing time, we will not be able to estimate Carol's finishing time better by finding out what Alice's finishing time was. This means we can model the relay race using only two interactions: Alice's effect on Bob and Bob's effect on Carol. We can omit the third, indirect interaction between Alice and Carol from our model.

Structured probabilistic models provide a formal framework for modeling only direct interactions between random variables. This allows the models to have significantly fewer parameters and therefore be estimated reliably from less data. These smaller models also have dramatically reduced computational cost in terms of storing the model, performing inference in the model, and drawing samples from the model.

16.2 Using Graphs
to Describe Model Structure

Structured probabilistic models use graphs (in the graph theory sense of "nodes" or "vertices" connected by edges) to represent interactions between random variables. Each node represents a random variable. Each edge represents a direct interaction. These direct interactions imply other, indirect interactions, but only the direct interactions need to be explicitly modeled.

There is more than one way to describe the interactions in a probability distribution using a graph. In the following sections we describe some of the most popular and useful approaches. Graphical models can be largely divided into two categories: models based on directed acyclic graphs, and models based on undirected graphs.

16.2.1 Directed Models

One kind of structured probabilistic model is the directed graphical model, otherwise known as the belief network or Bayesian network² (Pearl, 1985). Directed graphical models are called "directed" because their edges are directed,

² Judea Pearl suggested using the term "Bayesian network" when one wishes to "emphasize the
judgmental" nature of the values computed by the network, i.e., to highlight that they usually represent degrees of belief rather than frequencies of events.
Figure 16.2: A directed graphical model depicting the relay race example, with nodes for the finishing times t0, t1 and t2 of Alice, Bob and Carol. Alice's finishing time t0 influences Bob's finishing time t1, because Bob does not get to start running until Alice finishes. Likewise, Carol only gets to start running after Bob finishes, so Bob's finishing time t1 directly influences Carol's finishing time t2.

That is, they point from one vertex to another. This direction is represented in the drawing with an arrow. The direction of the arrow indicates which variable's probability distribution is defined in terms of the other's. Drawing an arrow from a to b means that we define the probability distribution over b via a conditional distribution, with a as one of the variables on the right side of the conditioning bar. In other words, the distribution over b depends on the value of a.
Continuing with the relay race example from section 16.1, suppose we name Alice's finishing time t0, Bob's finishing time t1, and Carol's finishing time t2. As we saw earlier, our estimate of t1 depends on t0. Our estimate of t2 depends directly on t1, but only indirectly on t0. We can draw this relationship in a directed graphical model, illustrated in figure 16.2.

Formally, a directed graphical model defined on variables x is defined by a directed acyclic graph G whose vertices are the random variables in the model, and a set of local conditional probability distributions p(x_i | Pa_G(x_i)), where Pa_G(x_i) gives the parents of x_i in G. The probability distribution over x is given by

p(x) = ∏_i p(x_i | Pa_G(x_i)).    (16.1)

In our relay race example, this means that, using the graph drawn in figure 16.2,
p(t0, t1, t2) = p(t0) p(t1 | t0) p(t2 | t1).    (16.2)

This is our first time seeing a structured probabilistic model in action. We can examine the cost of using it, in order to observe how structured modeling has many advantages relative to unstructured modeling.

Suppose we represented time by discretizing time ranging from minute 0 to minute 10 into 6-second chunks. This would make t0, t1 and t2 each be a discrete variable with 100 possible values. If we attempted to represent p(t0, t1, t2) with a table, it would need to store 999,999 values (100 values of t0 × 100 values of t1 × 100 values of t2, minus 1, since the probability of one of the configurations is made
redundant by the constraint that the sum of the probabilities be 1). If instead we only make a table for each of the conditional probability distributions, then the distribution over t0 requires 99 values, the table defining t1 given t0 requires 9,900 values, and so does the table defining t2 given t1. This comes to a total of 19,899 values. This means that using the directed graphical model reduced our number of parameters by a factor of more than 50!

In general, to model n discrete variables each having k values, the cost of the single table approach scales like O(k^n), as we have observed before. Now suppose we build a directed graphical model over these variables. If m is the maximum number of variables appearing (on either side of the conditioning bar) in a single conditional probability distribution, then the cost of the tables for the directed model scales like O(k^m). As long as we can design a model such that m ≪ n, we get very dramatic savings. In other words, so long as each variable has few parents in the graph, the distribution can be represented with very few parameters.
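The parameter counts above can be reproduced directly (a minimal sketch using the discretization from the text, with k = 100 six-second bins per finishing time):

```python
k = 100  # number of 6-second bins per finishing time

# One joint table over (t0, t1, t2), minus one value made redundant
# by the sum-to-1 constraint.
full_table = k ** 3 - 1

# Structured model: p(t0) needs k - 1 free values; each conditional
# table p(t1 | t0) and p(t2 | t1) needs k * (k - 1).
structured = (k - 1) + 2 * k * (k - 1)

print(full_table)                 # 999999
print(structured)                 # 19899
print(full_table / structured > 50)  # True: more than a 50x reduction
```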
Some restrictions on the graph structure, such as requiring it to be a tree, can also guarantee that operations like computing marginal or conditional distributions over subsets of variables are efficient.

It is important to realize what kinds of information can and cannot be encoded in the graph. The graph encodes only simplifying assumptions about which variables are conditionally independent from each other. It is also possible to make other kinds of simplifying assumptions. For example, suppose we assume that Bob always runs the same regardless of how Alice performed. (In reality, Alice's performance probably influences Bob's performance: depending on Bob's personality, if Alice runs especially fast in a given race, this might encourage Bob to push hard and match her exceptional performance, or it might make him overconfident and lazy.) Then the only effect Alice has on Bob's finishing time is that we must add Alice's finishing time to the total amount of time we think Bob needs to run.
This observation allows us to define a model with O(k) parameters instead of O(k²). However, note that t0 and t1 are still directly dependent with this assumption, because t1 represents the absolute time at which Bob finishes, not the total time he himself spends running. This means our graph must still contain an arrow from t0 to t1. The assumption that Bob's personal running time is independent from all other factors cannot be encoded in a graph over t0, t1, and t2. Instead, we encode this information in the definition of the conditional distribution itself. The conditional distribution is no longer a k × k − 1 element table indexed by t0 and t1, but is now a slightly more complicated formula using only k − 1 parameters. The directed graphical model syntax does not place any constraint on how we define
our conditional distributions. It only defines which variables they are allowed to take in as arguments.

16.2.2 Undirected Models

Directed graphical models give us one language for describing structured probabilistic models. Another popular language is that of undirected models, otherwise known as Markov random fields (MRFs) or Markov networks (Kindermann and Snell, 1980). As their name implies, undirected models use graphs whose edges are undirected.

Directed models are most naturally applicable to situations where there is a clear reason to draw each arrow in one particular direction. Often these are situations where we understand the causality, and the causality only flows in one direction. One such situation is the relay race example. Earlier runners affect the finishing times of later runners; later runners do not affect the finishing times of earlier runners. Not all situations we might want to model have such a clear direction to their interactions. When the interactions seem to have no intrinsic direction, or to operate in both directions, it may be more appropriate to use an undirected model.
As an example of such a situation, suppose we want to model a distribution over three binary variables: whether or not you are sick, whether or not your coworker is sick, and whether or not your roommate is sick. As in the relay race example, we can make simplifying assumptions about the kinds of interactions that take place. Assuming that your coworker and your roommate do not know each other, it is very unlikely that one of them will give the other an infection such as a cold directly. This event can be seen as so rare that it is acceptable not to model it. However, it is reasonably likely that either of them could give you a cold, and that you could pass it on to the other. We can model the indirect transmission of a cold from your coworker to your roommate by modeling the transmission of the cold from your coworker to you and the transmission of the cold from you to your roommate.
In this case, it is just as easy for you to cause your roommate to get sick as it is for your roommate to make you sick, so there is not a clean, unidirectional narrative on which to base the model. This motivates using an undirected model. As with directed models, if two nodes in an undirected model are connected by an edge, then the random variables corresponding to those nodes interact with each other directly. Unlike directed models, the edge in an undirected model has no arrow and is not associated with a conditional probability distribution.
Figure 16.3: An undirected graph representing how your roommate's health h_r, your health h_y, and your work colleague's health h_c affect each other. You and your roommate might infect each other with a cold, and you and your work colleague might do the same, but assuming that your roommate and your colleague do not know each other, they can only infect each other indirectly via you.

We denote the random variable representing your health as h_y, the random variable representing your roommate's health as h_r, and the random variable representing your colleague's health as h_c. See figure 16.3 for a drawing of the graph representing this scenario.

Formally, an undirected graphical model is a structured probabilistic model defined on an undirected graph G. For each clique C in the graph,³ a factor φ(C) (also called a clique potential) measures the affinity of the variables in that clique for being in each of their possible joint states. The factors are constrained to be non-negative. Together they define an unnormalized probability distribution
p̃(x) = ∏_{C∈G} φ(C).    (16.3)

The unnormalized probability distribution is efficient to work with so long as all the cliques are small. It encodes the idea that states with higher affinity are more likely. However, unlike in a Bayesian network, there is little structure to the definition of the cliques, so there is nothing to guarantee that multiplying them together will yield a valid probability distribution. See figure 16.4 for an example of reading factorization information from an undirected graph.

Our example of the cold spreading between you, your roommate, and your colleague contains two cliques. One clique contains h_y and h_c. The factor for this clique can be defined by a table, and might have values resembling these:

            h_y = 0    h_y = 1
  h_c = 0      2          1
  h_c = 1      1         10

³ A clique of the graph is a subset of nodes that are all connected to each other by an edge of
the graph.
A state of 1 indicates good health, while a state of 0 indicates poor health (having been infected with a cold). Both of you are usually healthy, so the corresponding state has the highest affinity. The state where only one of you is sick has the lowest affinity, because this is a rare state. The state where both of you are sick (because one of you has infected the other) is a higher affinity state, though still not as common as the state where both are healthy. To complete the model, we would need to also define a similar factor for the clique containing h_y and h_r.

16.2.3 The Partition Function

While the unnormalized probability distribution is guaranteed to be non-negative everywhere, it is not guaranteed to sum or integrate to 1. To obtain a valid probability distribution, we must use the corresponding normalized probability distribution:⁴

p(x) = (1/Z) p̃(x),    (16.4)

where Z is the value that results in the probability distribution summing or integrating to 1:

Z = ∫ p̃(x) dx.    (16.5)

You can think of Z as a constant when the φ functions are held constant.
Note that if the φ functions have parameters, then Z is a function of those parameters. It is common in the literature to write Z with its arguments omitted to save space. The normalizing constant Z is known as the partition function, a term borrowed from statistical physics.

Since Z is an integral or sum over all possible joint assignments of the state x, it is often intractable to compute. In order to be able to obtain the normalized probability distribution of an undirected model, the model structure and the definitions of the φ functions must be conducive to computing Z efficiently. In the context of deep learning, Z is usually intractable. Due to the intractability of computing Z exactly, we must resort to approximations. Such approximate algorithms are the topic of chapter 18.
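For the small health example, Z can be computed by brute-force enumeration (a sketch; the second clique table, for h_y and h_r, is not given in the text and is assumed here to reuse the same values for illustration):

```python
from itertools import product

# Clique potential over (h_y, h_c) from the text; the (h_y, h_r) clique
# is assumed to use the same table for this illustration.
phi = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 10.0}

def p_tilde(hy, hr, hc):
    # Unnormalized probability: product of the two clique potentials.
    return phi[(hy, hc)] * phi[(hy, hr)]

# Partition function: sum of p_tilde over all 2**3 joint states.
Z = sum(p_tilde(hy, hr, hc) for hy, hr, hc in product([0, 1], repeat=3))

def p(hy, hr, hc):
    return p_tilde(hy, hr, hc) / Z

print(Z)           # 130.0
print(p(1, 1, 1))  # the all-healthy state has the highest probability
```

With thousands of variables the sum would have exponentially many terms, which is exactly why Z is usually intractable in deep learning.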
One important consideration to keep in mind when designing undirected models is that it is possible to specify the factors in such a way that Z does not exist. This happens if some of the variables in the model are continuous and the integral

⁴ A distribution defined by normalizing a product of clique potentials is also called a Gibbs distribution.
of p̃(x) over their domain diverges. For example, suppose we want to model a single scalar variable x ∈ ℝ with a single clique potential φ(x) = x². In this case,

Z = ∫ x² dx.    (16.6)

Since this integral diverges, there is no probability distribution corresponding to this choice of φ(x). Sometimes the choice of some parameter of the φ functions determines whether the probability distribution is defined. For example, for φ(x; β) = exp(−βx²), the β parameter determines whether Z exists. Positive β results in a Gaussian distribution over x, but all other values of β make φ impossible to normalize.

One key difference between directed modeling and undirected modeling is that directed models are defined directly in terms of probability distributions from the start, while undirected models are defined more loosely by φ functions that are then converted into probability distributions.
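The β example can be checked numerically (a sketch: for β > 0 the integral converges to the Gaussian normalizer √(π/β), while for β ≤ 0 the integrand does not decay and the estimate grows without bound as the integration window widens):

```python
import math

def phi(x, beta):
    return math.exp(-beta * x * x)

def riemann_Z(beta, half_width, n=200_000):
    # Crude Riemann-sum estimate of the integral of phi over
    # [-half_width, half_width].
    dx = 2 * half_width / n
    return sum(phi(-half_width + i * dx, beta) * dx for i in range(n))

# beta = 1: the estimate stabilizes near sqrt(pi) ~ 1.7725.
print(abs(riemann_Z(1.0, 10.0) - math.sqrt(math.pi)) < 1e-3)  # True

# beta = -1: widening the window blows the estimate up, so Z diverges.
print(riemann_Z(-1.0, 10.0) < riemann_Z(-1.0, 20.0))          # True
```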
This changes the intuitions one must develop in order to work with these models. One key idea to keep in mind while working with undirected models is that the domain of each of the variables has a dramatic effect on the kind of probability distribution that a given set of φ functions corresponds to. For example, consider an n-dimensional vector-valued random variable x and an undirected model parametrized by a vector of biases b. Suppose we have one clique for each element of x, φ⁽ⁱ⁾(x_i) = exp(b_i x_i). What kind of probability distribution does this result in? The answer is that we do not have enough information, because we have not yet specified the domain of x. If x ∈ ℝⁿ, then the integral defining Z diverges and no probability distribution exists. If x ∈ {0, 1}ⁿ, then p(x) factorizes into n independent distributions, with p(x_i = 1) = sigmoid(b_i).
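This sensitivity to the domain can be verified by brute force for a small n (a sketch with arbitrary example biases; it checks the binary-vector case just described and the one-hot case discussed next):

```python
import math
from itertools import product

b = [0.5, -1.0, 2.0]  # arbitrary example biases
n = len(b)

def p_tilde(x):
    # One clique per element: product of exp(b_i * x_i).
    return math.exp(sum(bi * xi for bi, xi in zip(b, x)))

# Domain {0,1}^n: normalizing over all 2**n states factorizes into
# independent Bernoulli distributions with p(x_i = 1) = sigmoid(b_i).
states = list(product([0, 1], repeat=n))
Z = sum(p_tilde(x) for x in states)
p_x0_is_1 = sum(p_tilde(x) for x in states if x[0] == 1) / Z
sigmoid = 1 / (1 + math.exp(-b[0]))
print(abs(p_x0_is_1 - sigmoid) < 1e-9)  # True

# Domain = one-hot vectors: the same p_tilde normalizes to softmax(b).
one_hot = [tuple(int(i == j) for i in range(n)) for j in range(n)]
Z1 = sum(p_tilde(x) for x in one_hot)
softmax = [math.exp(bi) / sum(math.exp(bj) for bj in b) for bi in b]
print(all(abs(p_tilde(x) / Z1 - s) < 1e-9 for x, s in zip(one_hot, softmax)))  # True
```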
If the domain of x is the set of elementary basis vectors ({[1, 0, …, 0], [0, 1, …, 0], …, [0, 0, …, 1]}), then p(x) = softmax(b), so a large value of bᵢ actually reduces p(xⱼ = 1) for j ≠ i. Often, it is possible to leverage the effect of a carefully chosen domain of a variable in order to obtain complicated behavior from a relatively simple set of φ functions. We will explore a practical application of this idea later, in section 20.6.

16.2.4 Energy-Based Models

Many interesting theoretical results about undirected models depend on the assumption that ∀x, p̃(x) > 0. A convenient way to enforce this condition is to use an energy-based model (EBM), where

p̃(x) = exp(−E(x)).    (16.7)
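Equation 16.7 can be normalized explicitly for a small discrete model; a sketch with an energy function of my own arbitrary choosing:

```python
import itertools
import math

# A toy energy-based model over three binary variables; the energy function
# is my own arbitrary choice, favoring states whose neighbors agree.
def energy(x):
    a, b, c = x
    return -float(a == b) - float(b == c)

states = list(itertools.product([0, 1], repeat=3))
unnormalized = {x: math.exp(-energy(x)) for x in states}
Z = sum(unnormalized.values())          # the partition function
p = {x: u / Z for x, u in unnormalized.items()}

# exp(-E(x)) is positive for every real-valued energy, so no state is
# ever assigned probability exactly zero.
print(min(p.values()) > 0)  # True
```

Any real-valued E defines a valid distribution this way, which is what makes learning the energy an unconstrained problem.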
Figure 16.4: This graph implies that p(a, b, c, d, e, f) can be written as (1/Z) φ_{a,b}(a, b) φ_{b,c}(b, c) φ_{a,d}(a, d) φ_{b,e}(b, e) φ_{e,f}(e, f) for an appropriate choice of the φ functions.

Here E(x) is known as the energy function. Because exp(z) is positive for all z, this guarantees that no energy function will result in a probability of zero for any state x. Being completely free to choose the energy function makes learning simpler. If we learned the clique potentials directly, we would need to use constrained optimization to arbitrarily impose some specific minimal probability value. By learning the energy function, we can use unconstrained optimization.⁵ The probabilities in an energy-based model can approach arbitrarily close to zero but never reach it.
Any distribution of the form given by equation 16.7 is an example of a Boltzmann distribution. For this reason, many energy-based models are called Boltzmann machines (Fahlman et al., 1983; Ackley et al., 1985; Hinton et al., 1984; Hinton and Sejnowski, 1986). There is no accepted guideline for when to call a model an energy-based model and when to call it a Boltzmann machine. The term Boltzmann machine was first introduced to describe a model with exclusively binary variables, but today many models, such as the mean-covariance restricted Boltzmann machine, incorporate real-valued variables as well. While Boltzmann machines were originally defined to encompass both models with and without latent variables, the term Boltzmann machine is today most often used to designate models with latent variables, while Boltzmann machines without latent variables are more often called Markov random fields or log-linear models.
Cliques in an undirected graph correspond to factors of the unnormalized probability function. Because exp(a) exp(b) = exp(a + b), different cliques in the undirected graph correspond to the different terms of the energy function. In other words, an energy-based model is just a special kind of Markov network: the exponentiation makes each term in the energy function correspond to a factor for a different clique. See figure 16.5 for an example of how to read the form of the energy function from an undirected graph structure.

⁵For some models, we may still need to use constrained optimization to make sure Z exists.
Figure 16.5: This graph implies that E(a, b, c, d, e, f) can be written as E_{a,b}(a, b) + E_{b,c}(b, c) + E_{a,d}(a, d) + E_{b,e}(b, e) + E_{e,f}(e, f) for an appropriate choice of the per-clique energy functions. Note that we can obtain the φ functions in figure 16.4 by setting each φ to the exponential of the corresponding negative energy, e.g., φ_{a,b}(a, b) = exp(−E_{a,b}(a, b)).

One can view an energy-based model with multiple terms in its energy function as being a product of experts (Hinton, 1999).
Each term in the energy function corresponds to another factor in the probability distribution, and each term can be thought of as an “expert” that determines whether a particular soft constraint is satisfied. Each expert may enforce only one constraint that concerns only a low-dimensional projection of the random variables, but when combined by multiplication of probabilities, the experts together enforce a complicated high-dimensional constraint.

One part of the definition of an energy-based model serves no functional purpose from a machine learning point of view: the − sign in equation 16.7. This − sign could be incorporated into the definition of E. For many choices of the function E, the learning algorithm is free to determine the sign of the energy anyway. The − sign is present primarily to preserve compatibility between the machine learning literature and the physics literature. Many advances in probabilistic modeling were originally developed by statistical physicists, for whom E refers to actual, physical energy and does not have arbitrary sign. Terminology such as “energy” and “partition function” remains associated with these techniques, even though their mathematical applicability is broader than the physics context in which they were developed.
Some machine learning researchers (e.g., Smolensky (1986), who referred to negative energy as harmony) have chosen to omit the negation, but this is not the standard convention.

Many algorithms that operate on probabilistic models do not need to compute p_model(x) but only log p̃_model(x). For energy-based models with latent variables h, these algorithms are sometimes phrased in terms of the negative of this quantity,
Figure 16.6: (a) The path between random variable a and random variable b through s is active, because s is not observed. This means that a and b are not separated. (b) Here s is shaded in, to indicate that it is observed. Because the only path between a and b is through s, and that path is inactive, we can conclude that a and b are separated given s.

called the free energy:

F(x) = −log Σ_h exp(−E(x, h)).    (16.8)

In this book, we usually prefer the more general log p̃_model(x) formulation.

16.2.5 Separation and D-Separation

The edges in a graphical model tell us which variables directly interact. We often need to know which variables indirectly interact. Some of these indirect interactions can be enabled or disabled by observing other variables. More formally, we would like to know which subsets of variables are conditionally independent from each other, given the values of other subsets of variables.
Identifying the conditional independences in a graph is very simple in the case of undirected models. In this case, conditional independence implied by the graph is called separation. We say that a set of variables A is separated from another set of variables B given a third set of variables S if the graph structure implies that A is independent from B given S. If two variables a and b are connected by a path involving only unobserved variables, then those variables are not separated. If no path exists between them, or all paths contain an observed variable, then they are separated. We refer to paths involving only unobserved variables as “active” and paths including an observed variable as “inactive.”

When we draw a graph, we can indicate observed variables by shading them in. See figure 16.6 for a depiction of how active and inactive paths in an undirected model look when drawn in this way. See figure 16.7 for an example of reading separation from an undirected graph.
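This definition of separation is just graph reachability that treats observed variables as blocked; a sketch (the example edge set is my assumption, chosen to be consistent with the description of figure 16.7):

```python
from collections import deque

# A sketch of testing separation in an undirected model: start and goal are
# separated given the observed set iff every path between them passes
# through an observed variable.
def separated(graph, start, goal, observed):
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return False                  # found an active path
        for nbr in graph[node]:
            if nbr not in seen and nbr not in observed:
                seen.add(nbr)
                queue.append(nbr)
    return True

# A four-node graph consistent with the description of figure 16.7
# (the exact edge set is my assumption).
g = {"a": ["b", "d"], "b": ["a", "c", "d"], "c": ["b"], "d": ["a", "b"]}
print(separated(g, "a", "c", {"b"}))  # True: observing b blocks the only path
print(separated(g, "a", "d", {"b"}))  # False: the edge a-d remains active
```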
Similar concepts apply to directed models, except that in the context of directed models, these concepts are referred to as d-separation. The “d” stands for “dependence.” D-separation for directed graphs is defined the same as separation
Figure 16.7: An example of reading separation properties from an undirected graph. Here b is shaded to indicate that it is observed. Because observing b blocks the only path from a to c, we say that a and c are separated from each other given b. The observation of b also blocks one path between a and d, but there is a second, active path between them. Therefore, a and d are not separated given b.

for undirected graphs: we say that a set of variables A is d-separated from another set of variables B given a third set of variables S if the graph structure implies that A is independent from B given S.

As with undirected models, we can examine the independences implied by the graph by looking at what active paths exist in the graph. As before, two variables are dependent if there is an active path between them, and d-separated if no such path exists. In directed nets, determining whether a path is active is somewhat more complicated. See figure 16.8 for a guide to identifying active paths in a directed model. See figure 16.9 for an example of reading some properties from a graph.
It is important to remember that separation and d-separation tell us only about those conditional independences that are implied by the graph. There is no requirement that the graph imply all independences that are present. In particular, it is always legitimate to use the complete graph (the graph with all possible edges) to represent any distribution. In fact, some distributions contain independences that are not possible to represent with existing graphical notation. Context-specific independences are independences that are present depending on the value of some variables in the network. For example, consider a model of three binary variables: a, b and c. Suppose that when a is 0, b and c are independent, but when a is 1, b is deterministically equal to c. Encoding the behavior when a = 1 requires an edge connecting b and c. The graph then fails to indicate that b and c are independent when a = 0.

In general, a graph will never imply that an independence exists when it does not.
However, a graph may fail to encode an independence.
Figure 16.8: All of the kinds of active paths of length two that can exist between random variables a and b. (a) Any path with arrows proceeding directly from a to b or vice versa. This kind of path becomes blocked if s is observed. We have already seen this kind of path in the relay race example. (b) a and b are connected by a common cause s. For example, suppose s is a variable indicating whether or not there is a hurricane, and a and b measure the wind speed at two different nearby weather monitoring outposts. If we observe very high winds at station a, we might expect to also see high winds at b. This kind of path can be blocked by observing s. If we already know there is a hurricane, we expect to see high winds at b, regardless of what is observed at a. A lower than expected wind at a (for a hurricane) would not change our expectation of winds at b (knowing there is a hurricane). However, if s is not observed, then a and b are dependent, i.e., the path is active.
(c) a and b are both parents of s. This is called a v-structure or the collider case. The v-structure causes a and b to be related by the explaining away effect. In this case, the path is actually active when s is observed. For example, suppose s is a variable indicating that your colleague is not at work. The variable a represents her being sick, while b represents her being on vacation. If you observe that she is not at work, you can presume she is probably sick or on vacation, but it is not especially likely that both have happened at the same time. If you find out that she is on vacation, this fact is sufficient to explain her absence. You can infer that she is probably not also sick.
(d) The explaining away effect happens even if any descendant of s is observed! For example, suppose that c is a variable representing whether you have received a report from your colleague. If you notice that you have not received the report, this increases your estimate of the probability that she is not at work today, which in turn makes it more likely that she is either sick or on vacation. The only way to block a path through a v-structure is to observe none of the descendants of the shared child.
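The explaining away effect in the collider case can be checked numerically; a sketch with probabilities of my own choosing, where s = 1 deterministically whenever a or b is 1:

```python
from itertools import product

# A numeric sketch of explaining away: a (sick) and b (vacation) are
# independent causes of the absence s, which occurs iff a or b is 1.
p_a, p_b = 0.1, 0.1

def joint(a, b):
    return (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)

# P(a = 1 | s = 1): conditioning on the absence raises belief in sickness.
absent = [(a, b) for a, b in product([0, 1], repeat=2) if a or b]
p_a_given_s = (sum(joint(a, b) for a, b in absent if a == 1)
               / sum(joint(a, b) for a, b in absent))

# P(a = 1 | s = 1, b = 1): the vacation already explains the absence,
# so the belief in sickness drops back to the prior.
p_a_given_s_b = joint(1, 1) / (joint(0, 1) + joint(1, 1))

print(round(p_a_given_s, 3), round(p_a_given_s_b, 3))  # 0.526 0.1
```

Marginally a and b are independent, but once s is observed, learning b changes the posterior over a: the path through the collider became active.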
Figure 16.9: From this graph, we can read out several d-separation properties. Examples include:
• a and b are d-separated given the empty set.
• a and e are d-separated given c.
• d and e are d-separated given c.
We can also see that some variables are no longer d-separated when we observe some variables:
• a and b are not d-separated given c.
• a and b are not d-separated given d.
16.2.6 Converting between Undirected and Directed Graphs

We often refer to a specific machine learning model as being undirected or directed. For example, we typically refer to RBMs as undirected and sparse coding as directed. This choice of wording can be somewhat misleading, because no probabilistic model is inherently directed or undirected. Instead, some models are most easily described using a directed graph, while others are most easily described using an undirected graph.

Directed models and undirected models both have their advantages and disadvantages. Neither approach is clearly superior and universally preferred. Instead, we should choose which language to use for each task. This choice will partially depend on which probability distribution we wish to describe. We may choose to use either directed modeling or undirected modeling based on which approach can capture the most independences in the probability distribution, or which approach uses the fewest edges to describe the distribution.
There are other factors that can affect the decision of which language to use. Even while working with a single probability distribution, we may sometimes switch between different modeling languages. Sometimes a different language becomes more appropriate if we observe a certain subset of variables, or if we wish to perform a different computational task. For example, the directed model description often provides a straightforward approach to efficiently draw samples from the model (described in section 16.3), while the undirected model formulation is often useful for deriving approximate inference procedures (as we will see in chapter 19, where the role of undirected models is highlighted in equation 19.56).

Every probability distribution can be represented by either a directed model or by an undirected model. In the worst case, one can always represent any distribution by using a “complete graph.” In the case of a directed model, the complete graph is any directed acyclic graph where we impose some ordering on the random variables, and each variable has all other variables that precede it in the ordering as its ancestors in the graph. For an undirected model, the complete graph is simply a graph containing a single clique encompassing all of the variables. See figure 16.10 for an example.
Of course, the utility of a graphical model is that the graph implies that some variables do not interact directly. The complete graph is not very useful because it does not imply any independences. When we represent a probability distribution with a graph, we want to choose a graph that implies as many independences as possible, without implying any independences that do not actually exist.

From this point of view, some distributions can be represented more efficiently
Figure 16.10: Examples of complete graphs, which can describe any probability distribution. Here we show examples with four random variables. (Left) The complete undirected graph. In the undirected case, the complete graph is unique. (Right) A complete directed graph. In the directed case, there is not a unique complete graph. We choose an ordering of the variables and draw an arc from each variable to every variable that comes after it in the ordering. There are thus a factorial number of complete graphs for every set of random variables. In this example we order the variables from left to right, top to bottom.

using directed models, while other distributions can be represented more efficiently using undirected models. In other words, directed models can encode some independences that undirected models cannot encode, and vice versa.

Directed models are able to use one specific kind of substructure that undirected models cannot represent perfectly. This substructure is called an immorality. The structure occurs when two random variables a and b are both parents of a third random variable c, and there is no edge directly connecting a and b in either direction.
(The name “immorality” may seem strange; it was coined in the graphical models literature as a joke about unmarried parents.) To convert a directed model with graph D into an undirected model, we need to create a new graph U. For every pair of variables x and y, we add an undirected edge connecting x and y to U if there is a directed edge (in either direction) connecting x and y in D, or if x and y are both parents in D of a third variable z. The resulting U is known as a moralized graph. See figure 16.11 for examples of converting directed models to undirected models via moralization.

Likewise, undirected models can include substructures that no directed model can represent perfectly.
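The moralization rule is short enough to state as code; a sketch assuming a parent-list representation of D (the representation is my choice, not the book's):

```python
from itertools import combinations

# A sketch of moralization: convert a directed graph D into an undirected
# graph U by keeping every edge and "marrying" every pair of co-parents.
def moralize(parents):
    """parents maps each node to the list of its parents in D."""
    edges = set()
    for child, pars in parents.items():
        for p in pars:
            edges.add(frozenset((p, child)))   # undirected version of each edge
        for p, q in combinations(pars, 2):
            edges.add(frozenset((p, q)))       # connect parents sharing a child
    return edges

# The immorality a -> c <- b: moralization must add the edge {a, b}.
u = moralize({"a": [], "b": [], "c": ["a", "b"]})
print(sorted(sorted(e) for e in u))  # [['a', 'b'], ['a', 'c'], ['b', 'c']]
```

The added edge {a, b} is exactly what loses the marginal independence of a and b described above.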
Specifically, a directed graph D cannot capture all of the conditional independences implied by an undirected graph U if U contains a loop of length greater than three, unless that loop also contains a chord. A loop is a sequence of variables connected by undirected edges, with the last variable in the sequence connected back to the first variable in the sequence. A chord is a connection between any two non-consecutive variables in the sequence defining a loop. If U has loops of length four or greater and does not have chords for these loops, we must add the chords before we can convert it to a directed model.
Figure 16.11: Examples of converting directed models (top row) to undirected models (bottom row) by constructing moralized graphs. (Left) This simple chain can be converted to a moralized graph merely by replacing its directed edges with undirected edges. The resulting undirected model implies exactly the same set of independences and conditional independences. (Center) This graph is the simplest directed model that cannot be converted to an undirected model without losing some independences. This graph consists entirely of a single immorality. Because a and b are parents of c, they are connected by an active path when c is observed. To capture this dependence, the undirected model must include a clique encompassing all three variables. This clique fails to encode the fact that a ⊥ b. (Right) In general, moralization may add many edges to the graph, thus losing many implied independences.
For example, this sparse coding graph requires adding moralizing edges between every pair of hidden units, thus introducing a quadratic number of new direct dependences.
Figure 16.12: Converting an undirected model to a directed model. (Left) This undirected model cannot be converted directly to a directed model, because it has a loop of length four with no chords. Specifically, the undirected model encodes two different independences that no directed model can capture simultaneously: a ⊥ c | {b, d} and b ⊥ d | {a, c}. (Center) To convert the undirected model to a directed model, we must triangulate the graph, by ensuring that all loops of length greater than three have a chord. To do so, we can either add an edge connecting a and c, or we can add an edge connecting b and d. In this example, we choose to add the edge connecting a and c. (Right) To finish the conversion process, we must assign a direction to each edge. When doing so, we must not create any directed cycles. One way to avoid directed cycles is to impose an ordering over the nodes, and always point each edge from the node that comes earlier in the ordering to the node that comes later
in the ordering. In this example, we use the variable names to impose alphabetical order.

Adding these chords discards some of the independence information that was encoded in U. The graph formed by adding chords to U is known as a chordal or triangulated graph, because all the loops can now be described in terms of smaller, triangular loops. To build a directed graph D from the chordal graph, we need to also assign directions to the edges. When doing so, we must not create a directed cycle in D, or the result does not define a valid directed probabilistic model. One way to assign directions to the edges in D is to impose an ordering on the random variables, then point each edge from the node that comes earlier in the ordering to the node that comes later in the ordering. See figure 16.12 for a demonstration.
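The orientation step is simple to sketch, assuming an edge-list representation (my own choice); pointing every edge "forward" in the ordering can never create a directed cycle:

```python
# A sketch of orienting the edges of a triangulated graph by a node
# ordering: every edge points from the earlier node to the later one.
def orient(undirected_edges, ordering):
    rank = {v: i for i, v in enumerate(ordering)}
    return [(u, v) if rank[u] < rank[v] else (v, u)
            for u, v in undirected_edges]

# The loop a-b-c-d of figure 16.12 with the chord a-c added, oriented
# alphabetically as in the figure.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"), ("a", "c")]
directed = orient(edges, ["a", "b", "c", "d"])
print(directed)  # [('a', 'b'), ('b', 'c'), ('c', 'd'), ('a', 'd'), ('a', 'c')]
```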
16.2.7 Factor Graphs

Factor graphs are another way of drawing undirected models that resolve an ambiguity in the graphical representation of standard undirected model syntax. In an undirected model, the scope of every φ function must be a subset of some clique in the graph. Ambiguity arises because it is not clear if each clique actually has a corresponding factor whose scope encompasses the entire clique: for example, a clique containing three nodes may correspond to a factor over all three nodes, or may correspond to three factors that each contain only a pair of the nodes.
Chapter 16. Structured Probabilistic Models for Deep Learning

Factor graphs resolve this ambiguity by explicitly representing the scope of each φ function. Specifically, a factor graph is a graphical representation of an undirected model that consists of a bipartite undirected graph. Some of the nodes are drawn as circles; these nodes correspond to random variables, as in a standard undirected model. The rest of the nodes are drawn as squares; these nodes correspond to the factors φ of the unnormalized probability distribution. Variables and factors may be connected with undirected edges. A variable and a factor are connected in the graph if and only if the variable is one of the arguments to the factor in the unnormalized probability distribution. No factor may be connected to another factor in the graph, nor can a variable be connected to a variable. See figure 16.13 for an example of how factor graphs can resolve ambiguity in the interpretation of undirected networks.
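The claim that the pairwise factorization can be asymptotically cheaper is easy to check by counting table entries. The sketch below uses hypothetical helper names and assumes binary variables; it compares one factor whose scope is a whole k-node clique against one factor per pair of nodes:

```python
from math import comb

def single_factor_params(k, s=2):
    # One factor whose scope is the whole k-node clique: an s**k table.
    return s ** k

def pairwise_factor_params(k, s=2):
    # One factor per pair of nodes: C(k, 2) tables of s*s entries each.
    return comb(k, 2) * s * s

for k in range(3, 9):
    print(k, single_factor_params(k), pairwise_factor_params(k))
```

For the three-node clique of figure 16.13, the pairwise version actually needs more entries (12 vs. 8), but from k = 6 onward the single factor's exponential table dominates; this is the asymptotic advantage referred to in the figure caption.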
Figure 16.13: An example of how a factor graph can resolve ambiguity in the interpretation of undirected networks. (Left) An undirected network with a clique involving three variables: a, b and c. (Center) A factor graph corresponding to the same undirected model. This factor graph has one factor, f1, over all three variables. (Right) Another valid factor graph for the same undirected model. This factor graph has three factors, f1, f2 and f3, each over only two variables. Representation, inference, and learning are all asymptotically cheaper in this factor graph than in the factor graph depicted in the center, even though both require the same undirected graph to represent.

16.3 Sampling from Graphical Models

Graphical models also facilitate the task of drawing samples from a model. One advantage of directed graphical models is that a simple and efficient procedure called ancestral sampling can produce a sample from the joint distribution represented by the model. The basic idea is to sort the variables xi in the graph into a topological ordering, so that for all i and j, j is greater than i if xi is a parent of xj. The variables can then be sampled in this order.
In other words, we first sample x1 ∼ P(x1), then sample from P(x2 | Pa_G(x2)), and so on, until finally we sample from P(xn | Pa_G(xn)). So long as each conditional distribution p(xi | Pa_G(xi)) is easy to sample from, the whole model is easy to sample from. The topological sorting operation guarantees that we can read the conditional distributions in equation 16.1 and sample from them in order. Without the topological sorting, we might attempt to sample a variable before its parents are available. For some graphs, more than one topological ordering is possible. Ancestral sampling may be used with any of these topological orderings. Ancestral sampling is generally very fast (assuming sampling from each conditional is easy) and convenient. One drawback to ancestral sampling is that it applies only to directed graphical models. Another drawback is that it does not support every conditional sampling operation.
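The procedure can be sketched on a minimal example. The chain a → b → c and every probability value below are made up purely for illustration; the point is only that each variable is drawn after its parents:

```python
import random

# Toy directed model a -> b -> c with tabular conditionals.
# All probability values here are hypothetical, chosen only for illustration.
p_a = 0.6                           # P(a = 1)
p_b_given_a = {0: 0.2, 1: 0.9}      # P(b = 1 | a)
p_c_given_b = {0: 0.5, 1: 0.1}      # P(c = 1 | b)

def ancestral_sample(rng):
    # Topological order: each variable is sampled after its parents.
    a = int(rng.random() < p_a)
    b = int(rng.random() < p_b_given_a[a])
    c = int(rng.random() < p_c_given_b[b])
    return a, b, c

rng = random.Random(0)
samples = [ancestral_sample(rng) for _ in range(100_000)]
# Empirical P(b = 1) should approach 0.4 * 0.2 + 0.6 * 0.9 = 0.62.
print(sum(s[1] for s in samples) / len(samples))
```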
When we wish to sample from a subset of the variables in a directed graphical model, given some other variables, we often require that all the conditioning variables come earlier than the variables to be sampled in the ordered graph. In this case, we can sample from the local conditional probability distributions specified by the model distribution. Otherwise, the conditional distributions we need to sample from are the posterior distributions given the observed variables. These posterior distributions are usually not explicitly specified and parametrized in the model. Inferring these posterior distributions can be costly. In models where this is the case, ancestral sampling is no longer efficient.

Unfortunately, ancestral sampling is applicable only to directed models. We can sample from undirected models by converting them to directed models, but this often requires solving intractable inference problems (to determine the marginal distribution over the root nodes of the new directed graph) or requires introducing so many edges that the resulting directed model becomes intractable. Sampling from an undirected model without first converting it to a directed model seems to require resolving cyclical dependencies. Every variable interacts with every other variable, so there is no clear beginning point for the sampling process.
Unfortunately, drawing samples from an undirected graphical model is an expensive, multi-pass process. The conceptually simplest approach is Gibbs sampling. Suppose we have a graphical model over an n-dimensional vector of random variables x. We iteratively visit each variable xi and draw a sample conditioned on all the other variables, from p(xi | x−i). Due to the separation properties of the graphical model, we can equivalently condition on only the neighbors of xi. Unfortunately, after we have made one pass through the graphical model and sampled all n variables, we still do not have a fair sample from p(x). Instead, we must repeat the process and resample all n variables using the updated values of their neighbors.
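The pass-and-repeat procedure can be sketched for a tiny chain-structured model. Everything below (the three-variable chain, the coupling and bias values) is a made-up example; each conditional is computed from the full unnormalized distribution, which for a chain is equivalent to conditioning on the neighbors alone, and the estimate is compared against the exact marginal obtained by brute-force enumeration:

```python
import math
import random
from itertools import product

# Chain MRF x0 - x1 - x2 over binary variables; all potentials hypothetical.
def unnormalized(x):
    bias = math.exp(0.7) if x[0] == 1 else 1.0          # unary factor on x0
    same01 = math.exp(1.0) if x[0] == x[1] else 1.0     # pairwise factor
    same12 = math.exp(1.0) if x[1] == x[2] else 1.0     # pairwise factor
    return bias * same01 * same12

# Exact marginal P(x1 = 1) by brute-force enumeration, for comparison.
Z = sum(unnormalized(x) for x in product((0, 1), repeat=3))
exact = sum(unnormalized(x) for x in product((0, 1), repeat=3) if x[1] == 1) / Z

def gibbs_estimate(n_sweeps, rng):
    x = [0, 0, 0]
    hits = 0
    for _ in range(n_sweeps):
        for i in range(3):
            # Resample x[i] from p(x_i | x_{-i}); the ratio of unnormalized
            # probabilities only involves the factors touching x[i].
            x[i] = 1; p1 = unnormalized(x)
            x[i] = 0; p0 = unnormalized(x)
            x[i] = int(rng.random() < p1 / (p0 + p1))
        hits += x[1]
    return hits / n_sweeps

print(exact, gibbs_estimate(200_000, random.Random(0)))
```

After many sweeps the empirical frequency of x1 = 1 approaches the exact marginal, illustrating the asymptotic convergence described above.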
Asymptotically, after many repetitions, this process converges to sampling from the correct distribution. Unfortunately, it can be difficult to determine when the samples have reached a sufficiently accurate approximation of the desired distribution. Sampling techniques for undirected models are an advanced topic, covered in more detail in chapter 17.

16.4 Advantages of Structured Modeling

The primary advantage of using structured probabilistic models is that they allow us to dramatically reduce the cost of representing probability distributions as well as learning and inference. Sampling is also accelerated in the case of directed models, while the situation can be complicated with undirected models. The primary mechanism that allows all these operations to use less runtime and memory is choosing not to model certain interactions. Graphical models convey information by leaving edges out. Anywhere there is not an edge, the model specifies the assumption that we do not need to model a direct interaction.

A less quantifiable benefit of using structured probabilistic models is that they allow us to explicitly separate the representation of knowledge from the learning of knowledge or inference given existing knowledge. This makes our models easier to develop and debug.
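The savings from leaving edges out can be made concrete with a parameter count. The sketch below (hypothetical helper names) compares an unstructured probability table over n binary variables with a chain-structured directed model x1 → x2 → ... → xn:

```python
def full_joint_params(n):
    # An unstructured probability table over n binary variables needs
    # one entry per configuration, minus one for normalization.
    return 2 ** n - 1

def chain_params(n):
    # Chain x1 -> x2 -> ... -> xn: one number for P(x1 = 1), plus a
    # two-entry conditional table P(x_{i+1} = 1 | x_i) for each edge.
    return 1 + 2 * (n - 1)

print(full_joint_params(20), chain_params(20))  # 1048575 vs 39
```

Leaving out all edges except those along the chain turns an exponential representation into a linear one.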
We can design, analyze, and evaluate learning algorithms and inference algorithms that are applicable to broad classes of graphs. Independently, we can design models that capture the relationships we believe are important in our data. We can then combine these different algorithms and structures and obtain a Cartesian product of different possibilities. It would be much more difficult to design end-to-end algorithms for every possible situation.

16.5 Learning about Dependencies

A good generative model needs to accurately capture the distribution over the observed or "visible" variables v. Often the different elements of v are highly dependent on each other. In the context of deep learning, the approach most commonly used to model these dependencies is to introduce several latent or "hidden" variables, h. The model can then capture dependencies between any pair of variables vi and vj indirectly, via direct dependencies between vi and h, and direct dependencies between h and vj. A good model of v which did not contain any latent variables would need to have very large numbers of parents per node in a Bayesian network or very large cliques in a Markov network.
Just representing these higher-order interactions is costly: both in a computational sense, because the number of parameters that must be stored in memory scales exponentially with the number of members in a clique, and in a statistical sense, because this exponential number of parameters requires a wealth of data to estimate accurately.

When the model is intended to capture dependencies between visible variables with direct connections, it is usually infeasible to connect all variables, so the graph must be designed to connect those variables that are tightly coupled and omit edges between other variables. An entire field of machine learning, called structure learning, is devoted to this problem. For a good reference on structure learning, see Koller and Friedman (2009). Most structure learning techniques are a form of greedy search. A structure is proposed, a model with that structure is trained, and then given a score. The score rewards high training set accuracy and penalizes model complexity. Candidate structures with a small number of edges added or removed are then proposed as the next step of the search. The search proceeds to a new structure that is expected to increase the score.
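A toy version of this greedy search might look as follows. The data-generating process, the mutual-information score, and the complexity penalty are all invented for illustration; real structure learning scores (e.g. BIC) retrain a model at each step rather than scoring edges independently:

```python
import math
import random
from itertools import combinations

rng = random.Random(0)
# Toy data: x0 and x1 are strongly coupled; x2 is independent noise.
data = []
for _ in range(5000):
    x0 = rng.randint(0, 1)
    x1 = x0 if rng.random() < 0.9 else 1 - x0
    x2 = rng.randint(0, 1)
    data.append((x0, x1, x2))

def mutual_info(i, j):
    # Empirical mutual information (in nats) between columns i and j.
    n = len(data)
    pij, pi, pj = {}, {}, {}
    for row in data:
        a, b = row[i], row[j]
        pij[a, b] = pij.get((a, b), 0) + 1 / n
        pi[a] = pi.get(a, 0) + 1 / n
        pj[b] = pj.get(b, 0) + 1 / n
    return sum(p * math.log(p / (pi[a] * pj[b])) for (a, b), p in pij.items())

# Greedy search: repeatedly add the best-scoring absent edge; stop when the
# score gain (mutual information minus a complexity penalty) is <= 0.
penalty = 0.01
edges = set()
while True:
    candidates = [e for e in combinations(range(3), 2) if e not in edges]
    if not candidates:
        break
    best = max(candidates, key=lambda e: mutual_info(*e))
    if mutual_info(*best) - penalty <= 0:
        break
    edges.add(best)
print(sorted(edges))  # the penalty should prune everything but (0, 1)
```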
Using latent variables instead of adaptive structure avoids the need to perform discrete searches and multiple rounds of training. A fixed structure over visible and hidden variables can use direct interactions between visible and hidden units to impose indirect interactions between visible units. Using simple parameter learning techniques, we can learn a model with a fixed structure that imputes the right structure on the marginal p(v).

Latent variables have advantages beyond their role in efficiently capturing p(v). The new variables h also provide an alternative representation for v. For example, as discussed in section 3.9.6, the mixture of Gaussians model learns a latent variable that corresponds to which category of examples the input was drawn from. This means that the latent variable in a mixture of Gaussians model can be used to do classification. In chapter 14 we saw how simple probabilistic models like sparse coding learn latent variables that can be used as input features for a classifier, or as coordinates along a manifold.
Other models can be used in this same way, but deeper models and models with different kinds of interactions can create even richer descriptions of the input. Many approaches accomplish feature learning by learning latent variables. Often, given some model of v and h, experimental observations show that E[h | v] or argmax_h p(h, v) is a good feature mapping for v.
16.6 Inference and Approximate Inference

One of the main ways we can use a probabilistic model is to ask questions about how variables are related to each other. Given a set of medical tests, we can ask what disease a patient might have. In a latent variable model, we might want to extract features E[h | v] describing the observed variables v. Sometimes we need to solve such problems in order to perform other tasks. We often train our models using the principle of maximum likelihood. Because

log p(v) = E_{h∼p(h|v)} [ log p(h, v) − log p(h | v) ],    (16.9)

we often want to compute p(h | v) in order to implement a learning rule. All of these are examples of inference problems in which we must predict the value of some variables given other variables, or predict the probability distribution over some variables given the value of other variables. Unfortunately, for most interesting deep models, these inference problems are intractable, even when we use a structured graphical model to simplify them. The graph structure allows us to represent complicated, high-dimensional distributions with a reasonable number of parameters, but the graphs used for deep learning are usually not restrictive enough to also allow efficient inference.
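The identity in equation 16.9 holds because log p(h, v) − log p(h | v) = log p(v) for every value of h, so the expectation is over a constant. A quick numeric check on a made-up two-variable joint:

```python
import math

# A tiny discrete model: a hypothetical joint p(h, v) over h, v in {0, 1}.
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

v = 1
p_v = sum(p for (h, vv), p in joint.items() if vv == v)
# Posterior p(h | v) weights the bracketed term of equation 16.9.
expectation = sum(
    (joint[h, v] / p_v) * (math.log(joint[h, v]) - math.log(joint[h, v] / p_v))
    for h in (0, 1)
)
print(math.log(p_v), expectation)  # both values equal log p(v) = log 0.6
```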
It is straightforward to see that computing the marginal probability of a general graphical model is #P hard. The complexity class #P is a generalization of the complexity class NP. Problems in NP require determining only whether a problem has a solution, and finding a solution if one exists. Problems in #P require counting the number of solutions. To construct a worst-case graphical model, imagine that we define a graphical model over the binary variables in a 3-SAT problem. We can impose a uniform distribution over these variables. We can then add one binary latent variable per clause that indicates whether each clause is satisfied. We can then add another latent variable indicating whether all the clauses are satisfied. This can be done without making a large clique, by building a reduction tree of latent variables, with each node in the tree reporting whether two other variables are satisfied. The leaves of this tree are the variables for each clause. The root of the tree reports whether the entire problem is satisfied.
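The connection between the root marginal and counting can be seen directly on a tiny instance. The two-clause formula below is invented for illustration; under a uniform distribution over the literals, reading off the root's marginal amounts to computing the satisfying fraction, which in general is the #P-hard counting problem:

```python
from itertools import product

# A tiny 3-SAT instance over x0..x2; each clause lists (var_index, polarity).
# Hypothetical formula: (x0 or x1 or not x2) and (not x0 or x1 or x2).
clauses = [
    [(0, True), (1, True), (2, False)],
    [(0, False), (1, True), (2, True)],
]

def satisfied(assignment):
    # A literal (i, pos) is true when variable i matches its polarity.
    return all(any(assignment[i] == pos for i, pos in clause)
               for clause in clauses)

# Under a uniform distribution over assignments, the marginal probability
# that the root "all clauses satisfied" variable is 1 equals the fraction
# of satisfying assignments; computing it counts the solutions.
n = 3
count = sum(satisfied(a) for a in product((False, True), repeat=n))
print(count, count / 2 ** n)  # 6 of 8 assignments satisfy this formula
```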
Due to the uniform distribution over the literals, the marginal distribution over the root of the reduction tree specifies what fraction of assignments satisfy the problem. While this is a contrived worst-case example, NP-hard graphs commonly arise in practical real-world scenarios.

This motivates the use of approximate inference. In the context of deep learning, this usually refers to variational inference, in which we approximate the true distribution p(h | v) by seeking an approximate distribution q(h | v) that is as close to the true one as possible.
This and other techniques are described in depth in chapter 19.

16.7 The Deep Learning Approach to Structured Probabilistic Models

Deep learning practitioners generally use the same basic computational tools as other machine learning practitioners who work with structured probabilistic models. However, in the context of deep learning, we usually make different design decisions about how to combine these tools, resulting in overall algorithms and models that have a very different flavor from more traditional graphical models.

Deep learning does not always involve especially deep graphical models. In the context of graphical models, we can define the depth of a model in terms of the graphical model graph rather than the computational graph. We can think of a latent variable hi as being at depth j if the shortest path from hi to an observed variable is j steps. We usually describe the depth of the model as being the greatest depth of any such hi. This kind of depth is different from the depth induced by the computational graph. Many generative models used for deep learning have no latent variables or only one layer of latent variables, but use deep computational graphs to define the conditional distributions within a model.
Deep learning essentially always makes use of the idea of distributed representations. Even shallow models used for deep learning purposes (such as pretraining shallow models that will later be composed to form deep ones) nearly always have a single, large layer of latent variables. Deep learning models typically have more latent variables than observed variables. Complicated nonlinear interactions between variables are accomplished via indirect connections that flow through multiple latent variables. By contrast, traditional graphical models usually contain mostly variables that are at least occasionally observed, even if many of the variables are missing at random from some training examples. Traditional models mostly use higher-order terms and structure learning to capture complicated nonlinear interactions between variables. If there are latent variables, they are usually few in number.
The way that latent variables are designed also differs in deep learning. The deep learning practitioner typically does not intend for the latent variables to take on any specific semantics ahead of time; the training algorithm is free to invent the concepts it needs to model a particular dataset. The latent variables are usually not very easy for a human to interpret after the fact, though visualization techniques may allow some rough characterization of what they represent.
When latent variables are used in the context of traditional graphical models, they are often designed with some specific semantics in mind: the topic of a document, the intelligence of a student, the disease causing a patient's symptoms, and so on. These models are often much more interpretable by human practitioners and often have more theoretical guarantees, yet are less able to scale to complex problems and are not reusable in as many different contexts as deep models.

Another obvious difference is the kind of connectivity typically used in the deep learning approach. Deep graphical models typically have large groups of units that are all connected to other groups of units, so that the interactions between two groups may be described by a single matrix. Traditional graphical models have very few connections, and the choice of connections for each variable may be individually designed. The design of the model structure is tightly linked with the choice of inference algorithm. Traditional approaches to graphical models typically aim to maintain the tractability of exact inference. When this constraint is too limiting, a popular approximate inference algorithm is loopy belief propagation.
Both of these approaches often work well with very sparsely connected graphs. By comparison, models used in deep learning tend to connect each visible unit vi to very many hidden units hj, so that h can provide a distributed representation of vi (and probably several other observed variables too). Distributed representations have many advantages, but from the point of view of graphical models and computational complexity, distributed representations have the disadvantage of usually yielding graphs that are not sparse enough for the traditional techniques of exact inference and loopy belief propagation to be relevant. As a consequence, one of the most striking differences between the larger graphical models community and the deep graphical models community is that loopy belief propagation is almost never used for deep learning. Most deep models are instead designed to make Gibbs sampling or variational inference algorithms efficient.
Another consideration is that deep learning models contain a very large number of latent variables, making efficient numerical code essential. This provides an additional motivation, besides the choice of high-level inference algorithm, for grouping the units into layers with a matrix describing the interaction between two layers. This allows the individual steps of the algorithm to be implemented with efficient matrix product operations, or sparsely connected generalizations, like block diagonal matrix products or convolutions.

Finally, the deep learning approach to graphical modeling is characterized by a marked tolerance of the unknown. Rather than simplifying the model until all quantities we might want can be computed exactly, we increase the power of the model until it is just barely possible to train or use.
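As a sketch of what "a matrix describing the interaction between two layers" buys us, the conditional over an entire hidden layer in an RBM-style model reduces to one matrix-vector product followed by an elementwise sigmoid, rather than unit-by-unit message computations. The weights, biases, and input below are arbitrary illustration values:

```python
import math

# Hypothetical two-layer interaction: every hidden unit connects to every
# visible unit, so W is a dense 2 x 3 matrix (2 hidden, 3 visible units).
W = [[0.5, -1.0, 0.0],
     [1.5, 0.2, -0.3]]
b = [0.0, -0.5]          # hidden biases
v = [1.0, 0.0, 1.0]      # an observed visible vector

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_conditional(W, b, v):
    # One fused step for the whole layer: p(h_j = 1 | v) = sigmoid(W v + b)_j.
    return [sigmoid(sum(w * x for w, x in zip(row, v)) + bj)
            for row, bj in zip(W, b)]

print(hidden_conditional(W, b, v))
```

Grouping units into layers like this is what lets each inference step map onto the optimized matrix kernels the text describes.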