Source: Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville, Chapter 12 (Applications)
a model that can accommodate variable-length inputs and variable-length outputs. An RNN provides this ability. Section 10.2.4 describes several ways of constructing an RNN that represents a conditional distribution over a sequence given some input, and Section 10.4 describes how to accomplish this conditioning when the input is a sequence. In all cases, one model first reads the input sequence and emits a data structure that summarizes the input sequence. We call this summary the "context" C. The context C may be a list of vectors, or it may be a vector or tensor. The model that reads the input to produce C may be an RNN (Cho et al., 2014a; Sutskever et al., 2014; Jean et al., 2014) or a convolutional network (Kalchbrenner and Blunsom, 2013). A second model, usually an RNN, then reads the context C and generates a sentence in the target language. This general idea of an encoder-decoder framework for machine translation is illustrated in Figure 12.5. In order to generate an entire sentence conditioned on the source sentence, the model must have a way to represent the entire source sentence. Earlier models
were only able to represent individual words or phrases. From a representation
learning point of view, it can be useful to learn a representation in which sentences that have the same meaning have similar representations regardless of whether they were written in the source language or the target language. This strategy was first explored using a combination of convolutions and RNNs (Kalchbrenner and Blunsom, 2013). Later work introduced the use of an RNN for scoring proposed translations (Cho et al., 2014a) and for generating translated sentences (Sutskever et al., 2014). Jean et al. (2014) scaled these models to larger vocabularies.

12.4.5.1 Using an Attention Mechanism and Aligning Pieces of Data

Figure 12.6: A modern attention mechanism, as introduced by Bahdanau et al. (2015), is essentially a weighted average. A context vector c is formed by taking a weighted average of feature vectors h
(t) with weights α(t). In some applications, the feature vectors h are hidden units of a neural network, but they may also be raw input to the model. The weights α(t) are produced by the model itself. They are usually values in the interval [0, 1] and are intended to concentrate around just one h(t), so that the weighted average approximates reading that one specific time step precisely. The weights α(t) are usually produced by applying a softmax function to relevance scores emitted by another portion of the model. The attention mechanism is more expensive computationally than directly indexing the desired h(t), but direct indexing cannot be trained with gradient descent. The attention mechanism based on weighted averages is a smooth, differentiable approximation that can be trained with existing optimization algorithms.
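The weighted-average view of attention in the caption above can be sketched in a few lines of NumPy. This is only an illustration: the feature vectors and relevance scores below are made up, whereas in a real model both would be produced by learned components.

```python
import numpy as np

def softmax(scores):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def attention_context(h, scores):
    """Form a context vector c as a weighted average of feature vectors.

    h:      array of shape (T, d), one feature vector h(t) per time step
    scores: array of shape (T,), relevance scores emitted by the model
    """
    alpha = softmax(scores)   # weights in [0, 1] that sum to 1
    return alpha @ h          # c = sum over t of alpha(t) * h(t)

# Three time steps with 2-D feature vectors; the score strongly favors
# the second step, so c approximately "reads" h[1].
h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
scores = np.array([0.1, 5.0, 0.2])
c = attention_context(h, scores)
```

Because the softmax weights are differentiable with respect to the scores, gradients flow through the averaging step, which is exactly what direct (hard) indexing would prevent.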
Using a fixed-size representation to capture all the semantic details of a very long sentence of, say, 60 words is very difficult. It can be achieved by training a sufficiently large RNN well enough and for long enough, as demonstrated by Cho et al. (2014a) and Sutskever et al. (2014). However, a more efficient approach is to read the whole sentence or paragraph (to get the context and the gist of what
is being expressed), then produce the translated words one at a time, each time focusing on a different part of the input sentence in order to gather the semantic details required to produce the next output word. That is exactly the idea that Bahdanau et al. (2015) first introduced. The attention mechanism used to focus on specific parts of the input sequence at each time step is illustrated in Figure 12.6. We can think of an attention-based system as having three components:

1. A process that "reads" raw data (such as source words in a source sentence) and converts them into distributed representations, with one feature vector associated with each word position.

2. A list of feature vectors storing the output of the reader. This can be understood as a "memory" containing a sequence of facts, which can be retrieved later, not necessarily in the same order, without having to visit all of them.

3. A process that "exploits" the content of the memory to sequentially perform a task, at each time step having the ability to put attention on the content of one memory element (or a few, with a different weight).

The third component generates the translated sentence. When words
in a sentence written in one language are aligned with corresponding words in a translated sentence in another language, it becomes possible to relate the corresponding word embeddings. Earlier work showed that one could learn a kind of translation matrix relating the word embeddings in one language with the word embeddings in another (Kočiský et al., 2014), yielding lower alignment error rates than traditional approaches based on the frequency counts in the phrase table. There is even earlier work on learning cross-lingual word vectors (Klementiev et al., 2012). Many extensions to this approach are possible. For example, more efficient cross-lingual alignment (Gouws et al., 2014) allows training on larger datasets.

12.4.6 Historical Perspective

The idea of distributed representations for symbols was introduced by Rumelhart et al. (1986a) in one of the first explorations of back-propagation, with symbols corresponding to the identity of family members
and the neural network capturing the relationships between family members, with training examples forming triplets such as (Colin, Mother, Victoria). The first layer of the neural network learned a representation of each family member. For example, the features for Colin
might represent which family tree Colin was in, what branch of that tree he was in, what generation he was from, etc. One can think of the neural network as computing learned rules relating these attributes together in order to obtain the desired predictions. The model can then make predictions such as inferring who is the mother of Colin. The idea of forming an embedding for a symbol was extended to the idea of an embedding for a word by Deerwester et al. (1990). These embeddings were learned using the SVD. Later, embeddings would be learned by neural networks. The history of natural language processing is marked by transitions in the popularity of different ways of representing the input to the model. Following this early work on symbols or words, some of the earliest applications of neural networks to NLP (Miikkulainen and Dyer, 1991; Schmidhuber, 1996) represented the input as a sequence of characters. Bengio et al. (2001) returned the focus to modeling words and introduced neural language models, which produce interpretable word embeddings. These neural models have scaled up from defining representations of a small set of symbols in the 1980s to millions of words
(including proper nouns and misspellings) in modern applications. This computational scaling effort led to the invention of the techniques described above in Section 12.4.3. Initially, the use of words as the fundamental units of language models yielded improved language modeling performance (Bengio et al., 2001). To this day, new techniques continually push both character-based models (Sutskever et al., 2011) and word-based models forward, with recent work (Gillick et al., 2015) even modeling individual bytes of Unicode characters. The ideas behind neural language models have been extended into several natural language processing applications, such as parsing (Henderson, 2003, 2004; Collobert, 2011), part-of-speech tagging, semantic role labeling, chunking, etc., sometimes using a single multi-task learning architecture (Collobert and Weston, 2008a; Collobert et al., 2011a) in which the word embeddings are shared across tasks.
Two-dimensional visualizations of embeddings became a popular tool for analyzing language models following the development of the t-SNE dimensionality reduction algorithm (van der Maaten and Hinton, 2008) and its high-profile application to visualizing word embeddings by Joseph Turian in 2009.
12.5 Other Applications

In this section we cover a few other types of applications of deep learning that are different from the standard object recognition, speech recognition and natural language processing tasks discussed above. Part III of this book will expand that scope even further, to tasks that remain primarily research areas.

12.5.1 Recommender Systems

One of the major families of applications of machine learning in the information technology sector is the ability to make recommendations of items to potential users or customers. Two major types of applications can be distinguished: online advertising and item recommendations (often these recommendations are still for the purpose of selling a product). Both rely on predicting the association between a user and an item, either to predict the probability of some action (the user buying the product, or some proxy for this action) or the expected gain (which may depend on the value of the product) if an ad is shown or a recommendation is made regarding that product to that user. The internet is currently financed in great part by various forms of online advertising. There are major parts of the economy that rely on online shopping. Companies including Amazon and eBay use machine learning, including deep learning, for their product recommendations. Sometimes, the items are not products that are actually for sale.
Examples include selecting posts to display on social network news feeds, recommending movies to watch, recommending jokes, recommending advice from experts, matching players for video games, or matching people in dating services. Often, this association problem is handled like a supervised learning problem: given some information about the item and about the user, predict the proxy of interest (user clicks on ad, user enters a rating, user clicks on a "like" button, user buys product, user spends some amount of money on the product, user spends time visiting a page for the product, etc.). This often ends up being either a regression problem (predicting some conditional expected value) or a probabilistic classification problem (predicting the conditional probability of some discrete event). The early work on recommender systems relied on minimal information as inputs for these predictions: the user ID and the item ID. In this context, the only way to generalize is to rely on the similarity between the patterns of values of the target variable for
different users or for different items. Suppose that user 1 and user 2 both like items A, B and C. From this, we may infer that user 1 and
user 2 have similar tastes. If user 1 likes item D, then this should be a strong cue that user 2 will also like D. Algorithms based on this principle come under the name of collaborative filtering. Both non-parametric approaches (such as nearest-neighbor methods based on the estimated similarity between patterns of preferences) and parametric methods are possible. Parametric methods often rely on learning a distributed representation (also called an embedding) for each user and for each item. Bilinear prediction of the target variable (such as a rating) is a simple parametric method that is highly successful and often found as a component of state-of-the-art systems. The prediction is obtained by the dot product between the user embedding and the item embedding (possibly corrected by constants that depend only on either the user ID or the item ID). Let R̂ be the matrix containing our predictions, A a matrix with user embeddings in its rows, and B a matrix with item embeddings in its columns. Let b and c be vectors that contain respectively a kind of bias for each user (representing how grumpy or positive that user is in general) and for each item (representing its general
popularity). The bilinear prediction is thus obtained as follows:

    R̂_{u,i} = b_u + c_i + Σ_j A_{u,j} B_{j,i}.    (12.20)

Typically one wants to minimize the squared error between predicted ratings R̂_{u,i} and actual ratings R_{u,i}. User embeddings and item embeddings can then be conveniently visualized when they are first reduced to a low dimension (two or three), or they can be used to compare users or items against each other, just like word embeddings. One way to obtain these embeddings is by performing a singular value decomposition of the matrix R of actual targets (such as ratings). This corresponds to factorizing R = UDV^T (or a normalized variant) into the product of two factors, the lower-rank matrices A = UD and B = V^T. One problem with the SVD is that it treats the missing entries in an arbitrary way, as if they corresponded to a target value of 0.
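Equation 12.20 can be sketched directly in NumPy. The sketch below also fits the embeddings by minimizing squared error on the observed entries only, so missing entries incur no cost. Everything here is illustrative: the tiny ratings matrix, the embedding dimension, the number of steps and the learning rate are made up, and a real system would add regularization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny ratings matrix R (3 users x 4 items); np.nan marks missing entries.
R = np.array([[5.0, 3.0, np.nan, 1.0],
              [4.0, np.nan, np.nan, 1.0],
              [1.0, 1.0, np.nan, 5.0]])
observed = ~np.isnan(R)
target = np.where(observed, R, 0.0)   # values at unobserved entries unused

k = 2                                  # embedding dimension (illustrative)
A = 0.1 * rng.standard_normal((3, k))  # user embeddings in rows
B = 0.1 * rng.standard_normal((k, 4))  # item embeddings in columns
b = np.zeros(3)                        # per-user bias
c = np.zeros(4)                        # per-item bias

def predict():
    # Equation 12.20: R_hat[u, i] = b[u] + c[i] + sum_j A[u, j] * B[j, i]
    return b[:, None] + c[None, :] + A @ B

# Gradient descent on squared error, masked so that only observed
# ratings contribute a gradient.
lr = 0.05
for _ in range(5000):
    err = np.where(observed, predict() - target, 0.0)
    A -= lr * err @ B.T
    B -= lr * A.T @ err
    b -= lr * err.sum(axis=1)
    c -= lr * err.sum(axis=0)

R_hat = predict()   # entries at unobserved positions are the predictions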
Instead, we would like to avoid paying any cost for the predictions made on missing entries. Fortunately, the sum of squared errors on the observed ratings can also be easily minimized by gradient-based optimization. The SVD and the bilinear prediction of Equation 12.20 both performed very well in the competition for the Netflix Prize (Bennett and Lanning, 2007), which aimed at predicting ratings for films based only on previous ratings by a large set of anonymous users. Many machine learning experts participated in this competition, which took place between 2006 and 2009. It raised the level of research in recommender systems using advanced machine learning and yielded improvements in recommender systems. Even though it did not win by itself, the simple bilinear prediction or SVD was a component of the ensemble models
presented by most of the competitors, including the winners (Töscher et al., 2009; Koren, 2009). Beyond these bilinear models with distributed representations, one of the first uses of neural networks for collaborative filtering is based on the RBM undirected probabilistic model (Salakhutdinov et al., 2007). RBMs were an important element of the ensemble of methods that won the Netflix competition (Töscher et al., 2009; Koren, 2009). More advanced variants on the idea of factorizing the ratings matrix have also been explored in the neural networks community (Salakhutdinov and Mnih, 2008). However, there is a basic limitation of collaborative filtering systems: when a new item or a new user is introduced, its lack of rating history means that there is no way to evaluate its similarity with other items or users (respectively), or the degree of association between, say, that new user and existing items. This is called the problem of cold-start recommendations. A general way of solving the cold-start recommendation problem is to introduce extra information about the individual users and items. For example, this extra information could be
user profile information or features of each item. Systems that use such information are called content-based recommender systems. The mapping from a rich set of user features or item features to an embedding can be learned through a deep learning architecture (Huang et al., 2013; Elkahky et al., 2015). Specialized deep learning architectures such as convolutional networks have also been applied to learn to extract features from rich content such as musical audio tracks, for music recommendation (van den Oord et al., 2013). In that work, the convolutional net takes acoustic features as input and computes an embedding for the associated song. The dot product between this song embedding and the embedding for a user is then used to predict whether a user will listen to the song.

12.5.1.1 Exploration versus Exploitation

When making recommendations to users, an issue arises that goes beyond ordinary supervised learning and into the realm of reinforcement learning.
Many recommendation problems are most accurately described theoretically as contextual bandits (Langford and Zhang, 2008; Lu et al., 2010). The issue is that when we use the recommendation system to collect data, we get a biased and incomplete view of the preferences of users: we only see the responses of users to the items they were recommended, and not to the other items. In addition, in some cases we may not get any information on users for whom no recommendation has been made (for example, with ad auctions, it may be that the price proposed for an
ad was below a minimum price threshold, or did not win the auction, so the ad is not shown at all). More importantly, we get no information about what outcome would have resulted from recommending any of the other items. This would be like training a classifier by picking one class ŷ for each training example x (typically the class with the highest probability according to the model) and then only getting as feedback whether this was the correct class or not. Clearly, each example conveys less information than in the supervised case, where the true label y is directly accessible, so more examples are necessary. Worse, if we are not careful, we could end up with a system that continues picking the wrong decisions even as more and more data is collected, because the correct decision initially had a very low probability: until the learner picks that correct decision, it does not learn about it. This is similar to the situation in reinforcement learning, where only the reward for the selected action is observed. In general, reinforcement learning can involve a sequence of many actions and many rewards. The bandits scenario is a special case of reinforcement learning, in which the learner takes only a single action and receives a single reward. The bandit problem is easier in the
sense that the learner knows which reward is associated with which action. In the general reinforcement learning scenario, a high reward or a low reward might have been caused by a recent action or by an action in the distant past. The term contextual bandits refers to the case where the action is taken in the context of some input variable that can inform the decision. For example, we at least know the user identity, and we want to pick an item. The mapping from context to action is also called a policy. The feedback loop between the learner and the data distribution (which now depends on the actions of the learner) is a central research issue in the reinforcement learning and bandits literature. Reinforcement learning requires choosing a trade-off between exploration and exploitation. Exploitation refers to taking actions that come from the current, best version of the learned policy: actions that we know will achieve a high reward. Exploration refers to taking actions specifically in order to obtain more training data. If we know that given context x, action a gives
us a reward of 1, we do not know whether that is the best possible reward. We may want to exploit our current policy and continue taking action a in order to be relatively sure of obtaining a reward of 1. However, we may also want to explore by trying action a′. We do not know what will happen if we try action a′. We hope to get a reward of 2, but we run the risk of getting a reward of 0. Either way, we at least gain some knowledge. Exploration can be implemented in many ways, ranging from occasionally taking random actions intended to cover the entire space of possible actions, to model-based approaches that compute a choice of action based on its expected reward and the model's amount of uncertainty about that reward.
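The simplest of the exploration strategies just mentioned, occasionally taking a random action, is often called epsilon-greedy action selection. The sketch below is illustrative only: the reward estimates are a made-up stand-in for a learned model.

```python
import random

def epsilon_greedy(estimated_reward, n_actions, epsilon=0.1, rng=random):
    """Pick an action: explore with probability epsilon, else exploit.

    estimated_reward: function mapping an action index to the current
                      estimate of its reward (stand-in for the model).
    """
    if rng.random() < epsilon:
        # Exploration: a uniformly random action, so every action keeps
        # some chance of being tried and its estimate corrected.
        return rng.randrange(n_actions)
    # Exploitation: the action the current estimates say is best.
    return max(range(n_actions), key=estimated_reward)

# Suppose the current estimates say action 2 is best.
estimates = [0.1, 0.5, 0.9]
action = epsilon_greedy(estimates.__getitem__, n_actions=3, epsilon=0.0)
```

With epsilon = 0 the policy is purely exploitative; annealing epsilon toward 0 as data accumulates shifts the balance from exploration to exploitation over time.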
Many factors determine the extent to which we prefer exploration or exploitation. One of the most prominent factors is the time scale we are interested in. If the agent has only a short amount of time to accrue reward, then we prefer more exploitation. If the agent has a long time to accrue reward, then we begin with more exploration, so that future actions can be planned more effectively with more knowledge. As time progresses and our learned policy improves, we move toward more exploitation. Supervised learning has no trade-off between exploration and exploitation, because the supervision signal always specifies which output is correct for each input. There is no need to try out different outputs to determine if one is better than the model's current output; we always know that the label is the best output. Another difficulty arising in the context of reinforcement learning, besides the exploration-exploitation trade-off, is the difficulty of evaluating and comparing different policies. Reinforcement learning involves interaction between the learner and the environment. This feedback loop means that it is not straightforward to evaluate the learner's performance using a fixed set of test input values. The policy itself determines which inputs will be seen. Dudík et al. (2011) present techniques for evaluating contextual bandits.
12.5.2 Knowledge Representation, Reasoning and Question Answering

Deep learning approaches have been very successful in language modeling, machine translation and natural language processing due to the use of embeddings for symbols (Rumelhart et al., 1986a) and words (Deerwester et al., 1990; Bengio et al., 2001). These embeddings represent semantic knowledge about individual words and concepts. A research frontier is to develop embeddings for phrases and for relations between words and facts. Search engines already use machine learning for this purpose, but much more remains to be done to improve these more advanced representations.

12.5.2.1 Knowledge, Relations and Question Answering

One interesting research direction is determining how distributed representations can be trained to capture the relations between two entities. These relations allow us to formalize facts about objects and how objects interact with each other. In mathematics, a binary relation is a set of ordered pairs of objects. Pairs that are in the set are
said to have the relation, while those that are not in the set
Chapter 12. Applications

do not. For example, we can define the relation "is less than" on the set of entities {1, 2, 3} by defining the set of ordered pairs S = {(1, 2), (1, 3), (2, 3)}. Once this relation is defined, we can use it like a verb. Because (1, 2) ∈ S, we say that 1 is less than 2. Because (2, 1) ∉ S, we cannot say that 2 is less than 1. Of course, the entities that are related to one another need not be numbers. We could define a relation is_a_type_of containing tuples like (dog, mammal).

In the context of AI, we think of a relation as a sentence in a syntactically simple and highly structured language. The relation plays the role of a verb, while two arguments to the relation play the role of its subject and object. These sentences take the form of a triplet of tokens

(subject, verb, object), (12.21)

with values

(entity_i, relation_j, entity_k). (12.22)
We can also define an attribute, a concept analogous to a relation, but taking only one argument:

(entity_i, attribute_j). (12.23)

For example, we could define the has_fur attribute, and apply it to entities like dog.

Many applications require representing relations and reasoning about them. How should we best do this within the context of neural networks?

Machine learning models of course require training data. We can infer relations between entities from training datasets consisting of unstructured natural language. There are also structured databases that identify relations explicitly. A common structure for these databases is the relational database, which stores this same kind of information, albeit not formatted as three-token sentences. When a database is intended to convey commonsense knowledge about everyday life or expert knowledge about an application area to an artificial intelligence system, we call the database a knowledge base. Knowledge bases range from general ones like Freebase, OpenCyc, WordNet, or Wikibase¹ to more
specialized knowledge bases, like GeneOntology.² Representations for entities and relations can be learned by considering each triplet in a knowledge base as a training example and maximizing a training objective that captures their joint distribution (Bordes et al., 2013a).

¹ Respectively available from these web sites: freebase.com, cyc.com/opencyc, wordnet.princeton.edu, wikiba.se
² geneontology.org
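The triplet and attribute representations above can be made concrete with a minimal sketch. The relation names and the `holds` helper below are hypothetical, for illustration only; they follow the "is less than" example and the (subject, verb, object) format from the text, not any particular knowledge-base library.

```python
# A binary relation is a set of ordered pairs, per the "is less than" example:
less_than = {(1, 2), (1, 3), (2, 3)}  # relation on the entities {1, 2, 3}

# A knowledge base as a set of (subject, relation, object) triplets.
# Attributes take only one argument, modeled here with a None object slot.
knowledge_base = {
    ("dog", "is_a_type_of", "mammal"),  # relation triplet
    ("dog", "has_fur", None),           # attribute applied to the entity dog
}

def holds(kb, subject, relation, obj=None):
    """Return True if the fact (subject, relation, obj) is in the knowledge base."""
    return (subject, relation, obj) in kb

assert (1, 2) in less_than                # "1 is less than 2"
assert (2, 1) not in less_than            # we cannot say "2 is less than 1"
assert holds(knowledge_base, "dog", "is_a_type_of", "mammal")
assert holds(knowledge_base, "dog", "has_fur")
```

Learned models replace this exact set-membership test with a scoring function over embeddings, so that plausible facts missing from the set can still receive high scores.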
In addition to training data, we also need to define a model family to train. A common approach is to extend neural language models to model entities and relations. Neural language models learn a vector that provides a distributed representation of each word. They also learn about interactions between words, such as which word is likely to come after a sequence of words, by learning functions of these vectors. We can extend this approach to entities and relations by learning an embedding vector for each relation. In fact, the parallel between modeling language and modeling knowledge encoded as relations is so close that researchers have trained representations of such entities by using both knowledge bases and natural language sentences (Bordes et al., 2011, 2012; Wang et al., 2014a) or combining data from multiple relational databases (Bordes et al., 2013b). Many possibilities exist for the particular parametrization associated with such a model. Early work on learning about relations between entities (Paccanaro and Hinton, 2000) posited highly constrained parametric forms ("linear relational embeddings"), often using a different form of representation for the relation than for the entities. For example,
Paccanaro and Hinton (2000) and Bordes et al. (2011) used vectors for entities and matrices for relations, with the idea that a relation acts like an operator on entities. Alternatively, relations can be considered as any other entity (Bordes et al., 2012), allowing us to make statements about relations, but more flexibility is put in the machinery that combines them in order to model their joint distribution.

A practical short-term application of such models is link prediction: predicting missing arcs in the knowledge graph. This is a form of generalization to new facts, based on old facts. Most of the knowledge bases that currently exist have been constructed through manual labor, which tends to leave many and probably the majority of true relations absent from the knowledge base. See Wang et al. (2014b), Lin et al. (2015) and García-Durán et al. (2015) for examples of such an application.

Evaluating the performance of a model on a link prediction task is difficult because we have only a dataset of positive examples
(facts that are known to be true). If the model proposes a fact that is not in the dataset, we are unsure whether the model has made a mistake or discovered a new, previously unknown fact. The metrics are thus somewhat imprecise and are based on testing how the model ranks a held-out set of known true positive facts compared to other facts that are less likely to be true. A common way to construct interesting examples that are probably negative (facts that are probably false) is to begin with a true fact and create corrupted versions of that fact, for example by replacing one entity in the relation with a different entity selected at random. The popular precision at 10% metric counts how many times the model ranks a "correct" fact among the top 10% of all corrupted versions of that fact.
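The ranking-based evaluation just described can be sketched as follows. The bilinear scoring function here is a hypothetical stand-in (real link-prediction models such as those cited above learn their embeddings by maximizing a training objective); the sketch only shows the mechanics of corrupting a fact and checking whether the true fact lands in the top 10% of its corrupted versions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_entities, n_relations, dim = 50, 4, 8
entity_emb = rng.normal(size=(n_entities, dim))      # one vector per entity
relation_emb = rng.normal(size=(n_relations, dim))   # one vector per relation

def score(s, r, o):
    # Hypothetical bilinear-style scorer, for illustration only.
    return float(entity_emb[s] @ (relation_emb[r] * entity_emb[o]))

true_fact = (3, 0, 7)  # (entity_i, relation_j, entity_k)

# Corrupt the true fact by swapping the object entity for every other entity.
corrupted = [(3, 0, o) for o in range(n_entities) if o != 7]

# Rank the true fact among its corrupted versions by model score.
all_facts = [true_fact] + corrupted
ranked = sorted(all_facts, key=lambda f: score(*f), reverse=True)
rank = ranked.index(true_fact) + 1

# Precision at 10%: is the true fact in the top 10% of this candidate list?
in_top_10_percent = rank <= max(1, len(all_facts) // 10)
print(rank, in_top_10_percent)
```

A full evaluation would repeat this over many held-out facts (and typically also corrupt the subject slot) and report the fraction of facts ranked in the top 10%.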
Another application of knowledge bases and distributed representations for them is word-sense disambiguation (Navigli and Velardi, 2005; Bordes et al., 2012), which is the task of deciding which of the senses of a word is the appropriate one in some context.

Eventually, knowledge of relations combined with a reasoning process and understanding of natural language could allow us to build a general question answering system. A general question answering system must be able to process input information and remember important facts, organized in a way that enables it to retrieve and reason about them later. This remains a difficult open problem which can only be solved in restricted "toy" environments. Currently, the best approach to remembering and retrieving specific declarative facts is to use an explicit memory mechanism, as described in section 10.12. Memory networks were first proposed to solve a toy question answering task (Weston et al., 2014). Kumar et al. (2015) have proposed an extension that uses GRU recurrent nets to read the input into the memory and to produce the answer given the contents of the memory.

Deep learning has been applied to many other applications besides the ones described here, and will surely be applied to
even more after this writing. It would be impossible to describe anything remotely resembling comprehensive coverage of such a topic. This survey provides a representative sample of what is possible as of this writing.

This concludes Part II, which has described modern practices involving deep networks, comprising all of the most successful methods. Generally speaking, these methods involve using the gradient of a cost function to find the parameters of a model that approximates some desired function. With enough training data, this approach is extremely powerful. We now turn to Part III, in which we step into the territory of research: methods that are designed to work with less training data or to perform a greater variety of tasks, where the challenges are more difficult and not as close to being solved as the situations we have described so far.
Part III: Deep Learning Research
This part of the book describes the more ambitious and advanced approaches to deep learning, currently pursued by the research community.

In the previous parts of the book, we have shown how to solve supervised learning problems: how to learn to map one vector to another, given enough examples of the mapping. Not all problems we might want to solve fall into this category. We may wish to generate new examples, or determine how likely some point is, or handle missing values and take advantage of a large set of unlabeled examples or examples from related tasks. A shortcoming of the current state of the art for industrial applications is that our learning algorithms require large amounts of supervised data to achieve good accuracy. In this part of the book, we discuss some of the speculative approaches to reducing the amount of labeled data necessary for existing models to work well and be applicable across a broader range of tasks. Accomplishing these goals usually requires some form of unsupervised or semi-supervised learning.

Many deep learning algorithms have been designed to tackle unsupervised learning problems, but none have truly solved the problem in the same way that deep learning has largely solved the supervised learning problem for a wide variety of tasks. In this part of the book, we describe the existing approaches to
unsupervised learning and some of the popular thought about how we can make progress in this field.

A central cause of the difficulty with unsupervised learning is the high dimensionality of the random variables being modeled. This brings two distinct challenges: a statistical challenge and a computational challenge. The statistical challenge regards generalization: the number of configurations we may want to distinguish can grow exponentially with the number of dimensions of interest, and this quickly becomes much larger than the number of examples one can possibly have (or use with bounded computational resources). The computational challenge associated with high-dimensional distributions arises because many algorithms for learning or using a trained model (especially those based on estimating an explicit probability function) involve intractable computations that grow exponentially with the number of dimensions.

With probabilistic models, this computational challenge arises from the need to perform intractable inference or simply from the need to normalize the distribution.

• Intractable inference: Inference is discussed mostly in
chapter 19. It regards the question of guessing the probable values of some variables a, given other variables b, with respect to a model that captures the joint distribution over
a, b and c. In order to even compute such conditional probabilities one needs to sum over the values of the variables c, as well as compute a normalization constant which sums over the values of a and c.

• Intractable normalization constants (the partition function): The partition function is discussed mostly in chapter 18. Normalizing constants of probability functions come up in inference (above) as well as in learning. Many probabilistic models involve such a normalizing constant. Unfortunately, learning such a model often requires computing the gradient of the logarithm of the partition function with respect to the model parameters. That computation is generally as intractable as computing the partition function itself. Markov chain Monte Carlo (MCMC) methods (chapter 17) are often used to deal with the partition function (computing it or its gradient). Unfortunately, MCMC methods suffer when the modes of the model distribution are numerous and well-separated, especially in high-dimensional spaces (section 17.5).

One way to confront these intractable computations is to approximate them, and many approaches have been proposed, as discussed in this third part of the book. Another interesting way,
also discussed here, would be to avoid these intractable computations altogether by design; methods that do not require such computations are thus very appealing. Several generative models have been proposed in recent years with that motivation. A wide variety of contemporary approaches to generative modeling are discussed in chapter 20.

Part III is the most important for a researcher: someone who wants to understand the breadth of perspectives that have been brought to the field of deep learning, and push the field forward towards true artificial intelligence.
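The exponential cost of exact normalization can be made concrete with a small sketch. The energy function below is a hypothetical quadratic form chosen only for illustration; the point is that the brute-force partition function sums over all 2^n binary configurations, which is feasible for n = 10 but hopeless for, say, n = 100.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def partition_function(n, energy):
    """Brute-force normalizing constant over all 2**n binary configurations.

    This is exactly the computation that becomes intractable: the number
    of terms grows exponentially with the dimensionality n.
    """
    return sum(np.exp(-energy(np.array(x)))
               for x in itertools.product([0, 1], repeat=n))

n = 10
W = rng.normal(size=(n, n))

def energy(x):
    # A simple, hypothetical energy function over binary vectors.
    return -x @ W @ x

Z = partition_function(n, energy)  # sums 2**10 = 1024 terms
print(Z)
```

MCMC and the approximation schemes discussed in this part of the book exist precisely to avoid evaluating sums like this one term by term.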
Chapter 13. Linear Factor Models

Many of the research frontiers in deep learning involve building a probabilistic model of the input, p_model(x). Such a model can, in principle, use probabilistic inference to predict any of the variables in its environment given any of the other variables. Many of these models also have latent variables h, with p_model(x) = E_h p_model(x | h). These latent variables provide another means of representing the data. Distributed representations based on latent variables can obtain all of the advantages of representation learning that we have seen with deep feedforward and recurrent networks.

In this chapter, we describe some of the simplest probabilistic models with latent variables: linear factor models. These models are sometimes used as building blocks of mixture models (Hinton et al., 1995a; Ghahramani and Hinton, 1996; Roweis et al., 2002) or larger, deep probabilistic models (Tang et al., 2012). They also show many of the basic approaches necessary to build generative models that the more advanced deep models will extend further.

A linear factor model is defined by the use of a stochastic, linear
decoder function that generates x by adding noise to a linear transformation of h.

These models are interesting because they allow us to discover explanatory factors that have a simple joint distribution. The simplicity of using a linear decoder made these models some of the first latent variable models to be extensively studied.

A linear factor model describes the data generation process as follows. First, we sample the explanatory factors h from a distribution

h ~ p(h), (13.1)

where p(h) is a factorial distribution, with p(h) = ∏_i p(h_i), so that it is easy to
sample from. Next we sample the real-valued observable variables given the factors:

x = Wh + b + noise, (13.2)

where the noise is typically Gaussian and diagonal (independent across dimensions). This is illustrated in figure 13.1.

Figure 13.1: The directed graphical model describing the linear factor model family, in which we assume that an observed data vector x is obtained by a linear combination of independent latent factors h, plus some noise, x = Wh + b + noise. Different models, such as probabilistic PCA, factor analysis or ICA, make different choices about the form of the noise and of the prior p(h).

13.1 Probabilistic PCA and Factor Analysis

Probabilistic PCA (principal components analysis), factor analysis and other linear factor models are special cases of the above equations (13.1 and 13.2) and only differ in the choices made for the noise distribution and the model's prior over latent
variables h before observing x. In factor analysis (Bartholomew, 1987; Basilevsky, 1994), the latent variable prior is just the unit variance Gaussian

h ~ N(h; 0, I), (13.3)

while the observed variables x_i are assumed to be conditionally independent, given h. Specifically, the noise is assumed to be drawn from a diagonal covariance Gaussian distribution, with covariance matrix ψ = diag(σ²), with σ² = [σ²_1, σ²_2, ..., σ²_n]⊤ a vector of per-variable variances.

The role of the latent variables is thus to capture the dependencies between the different observed variables x_i. Indeed, it can easily be shown that x is just a multivariate normal random variable, with

x ~ N(x; b, WW⊤ + ψ). (13.4)
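The factor analysis generative process and its Gaussian marginal can be checked empirically with a short sketch: sampling h from a unit Gaussian and x = Wh + b + noise, the sample covariance of x should approach WW⊤ + ψ as predicted by equation 13.4. The particular dimensions and noise variances below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 4, 2, 200_000  # observed dim, latent dim, number of samples

W = rng.normal(size=(d, k))
b = rng.normal(size=d)
sigma2 = np.array([0.1, 0.2, 0.3, 0.4])  # per-variable noise variances (psi)

# Ancestral sampling per equations 13.1-13.2:
h = rng.normal(size=(n, k))                        # h ~ N(0, I)
noise = rng.normal(size=(n, d)) * np.sqrt(sigma2)  # diagonal Gaussian noise
x = h @ W.T + b + noise                            # x = Wh + b + noise

# Equation 13.4 predicts Cov(x) = W W^T + diag(sigma2):
emp_cov = np.cov(x, rowvar=False)
pred_cov = W @ W.T + np.diag(sigma2)
print(np.max(np.abs(emp_cov - pred_cov)))  # small sampling error
```

Setting all entries of `sigma2` equal turns this same sketch into probabilistic PCA.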
In order to cast PCA in a probabilistic framework, we can make a slight modification to the factor analysis model, making the conditional variances σ²_i equal to each other. In that case the covariance of x is just WW⊤ + σ²I, where σ² is now a scalar. This yields the conditional distribution

x ~ N(x; b, WW⊤ + σ²I), (13.5)

or equivalently

x = Wh + b + σz, (13.6)

where z ~ N(z; 0, I) is Gaussian noise. Tipping and Bishop (1999) then show an iterative EM algorithm for estimating the parameters W and σ².

This probabilistic PCA model takes advantage of the observation that most variations in the data can be captured by the latent variables h, up to some small residual reconstruction error σ². As shown by Tipping and Bishop (1999), probabilistic PCA becomes PCA as σ → 0. In that case, the conditional expected value of h given x becomes an orthogonal projection of x − b onto the space spanned by the d columns of W, as in PCA. As σ
→ 0, the density model defined by probabilistic PCA becomes very sharp around these d dimensions spanned by the columns of W. This can make the model assign very low likelihood to the data if the data does not actually cluster near a hyperplane.

13.2 Independent Component Analysis (ICA)

Independent component analysis (ICA) is among the oldest representation learning algorithms (Herault and Ans, 1984; Jutten and Herault, 1991; Comon, 1994; Hyvärinen, 1999; Hyvärinen et al., 2001a; Hinton et al., 2001; Teh et al., 2003). It is an approach to modeling linear factors that seeks to separate an observed signal into many underlying signals that are scaled and added together to form the observed data. These signals are intended to be fully independent, rather than merely decorrelated from each other.¹

Many different specific methodologies are referred to as ICA. The variant that is most
similar to the other generative models we have described here is a variant (Pham et al., 1992) that trains a fully parametric generative model. The prior distribution over the underlying factors, p(h), must be fixed ahead of time by the user. The model then deterministically generates x = Wh. We can perform a

¹ See section 3.8 for a discussion of the difference between uncorrelated variables and independent variables.
nonlinear change of variables (using equation 3.47) to determine p(x). Learning the model then proceeds as usual, using maximum likelihood.

The motivation for this approach is that by choosing p(h) to be independent, we can recover underlying factors that are as close as possible to independent. This is commonly used, not to capture high-level abstract causal factors, but to recover low-level signals that have been mixed together. In this setting, each training example is one moment in time, each x_i is one sensor's observation of the mixed signals, and each h_i is one estimate of one of the original signals. For example, we might have n people speaking simultaneously. If we have n different microphones placed in different locations, ICA can detect the changes in the volume between each speaker as heard by each microphone, and separate the signals so that each h_i contains only one person speaking clearly. This is commonly used in neuroscience for electroencephalography, a technology for recording electrical signals originating in the brain. Many electrode sensors placed on the subject's head are used to measure many electrical signals coming from the body. The experimenter is typically only interested in signals from the brain, but signals from the subject's
heart and eyes are strong enough to confound measurements taken at the subject's scalp. The signals arrive at the electrodes mixed together, so ICA is necessary to separate the electrical signature of the heart from the signals originating in the brain, and to separate signals in different brain regions from each other.

As mentioned before, many variants of ICA are possible. Some add some noise in the generation of x rather than using a deterministic decoder. Most do not use the maximum likelihood criterion, but instead aim to make the elements of h = W⁻¹x independent from each other. Many criteria that accomplish this goal are possible. Equation 3.47 requires taking the determinant of W, which can be an expensive and numerically unstable operation. Some variants of ICA avoid this problematic operation by constraining W to be orthogonal.

All variants of ICA require that p(h) be non-Gaussian. This is because if p(h) is an independent prior with
Gaussian components, then W is not identifiable: we can obtain the same distribution over p(x) for many values of W. This is very different from other linear factor models like probabilistic PCA and factor analysis, which often require p(h) to be Gaussian in order to make many operations on the model have closed form solutions. In the maximum likelihood approach where the user explicitly specifies the distribution, a typical choice is to use p(h_i) = (d/dh_i) σ(h_i). Typical choices of these non-Gaussian distributions have larger peaks near 0 than does the Gaussian distribution, so we can also see most implementations of ICA as learning sparse features.
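The preference for sharply peaked, heavy-tailed priors can be quantified with excess kurtosis, the statistic that many ICA variants maximize (as discussed below). A minimal sketch, with the Laplace distribution standing in as a typical sparse, non-Gaussian choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def excess_kurtosis(s):
    """Standardized fourth moment minus 3; zero for a Gaussian."""
    s = (s - s.mean()) / s.std()
    return np.mean(s ** 4) - 3.0

gaussian = rng.normal(size=n)
laplace = rng.laplace(size=n)  # sharper peak at 0, heavier tails

# The Laplace source is strongly non-Gaussian by this measure
# (excess kurtosis near 3, versus near 0 for the Gaussian), which is
# why kurtosis-based ICA criteria can identify such sources.
print(excess_kurtosis(gaussian), excess_kurtosis(laplace))
```

This also illustrates the identifiability issue from the other direction: a criterion based on departure from Gaussianity has nothing to maximize when the true sources are Gaussian.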
Many variants of ICA are not generative models in the sense that we use the phrase. In this book, a generative model either represents p(x) or can draw samples from it. Many variants of ICA only know how to transform between x and h, but do not have any way of representing p(h), and thus do not impose a distribution over p(x). For example, many ICA variants aim to increase the sample kurtosis of h = W⁻¹x, because high kurtosis indicates that p(h) is non-Gaussian, but this is accomplished without explicitly representing p(h). This is because ICA is more often used as an analysis tool for separating signals, rather than for generating data or estimating its density.

Just as PCA can be generalized to the nonlinear autoencoders described in chapter 14, ICA can be generalized to a nonlinear generative model, in which we use a nonlinear function f to generate the observed data. See Hyvärinen and Pajunen (1999) for the initial work on nonlinear ICA and its successful use with ensemble learning by Roberts and Everson (2001) and Lappalainen et al.
(2000).

Another nonlinear extension of ICA is the approach of nonlinear independent components estimation, or NICE (Dinh et al., 2014), which stacks a series of invertible transformations (encoder stages) that have the property that the determinant of the Jacobian of each transformation can be computed efficiently. This makes it possible to compute the likelihood exactly and, like ICA, attempts to transform the data into a space where it has a factorized marginal distribution, but is more likely to succeed thanks to the nonlinear encoder. Because the encoder is associated with a decoder that is its perfect inverse, it is straightforward to generate samples from the model (by first sampling from p(h) and then applying the decoder).

Another generalization of ICA is to learn groups of features, with statistical dependence allowed within a group but discouraged between groups (Hyvärinen and Hoyer, 1999; Hyvärinen et al., 2001b). When the groups of related
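The invertibility property that NICE relies on can be illustrated with its additive coupling layer. A minimal NumPy sketch, with a fixed nonlinear map standing in for the learned network m (the function names are illustrative, not the paper's code): the second half of the input is shifted by a function of the first half, so the layer can be undone exactly and its Jacobian determinant is 1.

```python
import numpy as np

def coupling_forward(x, m):
    """Additive coupling layer (as in NICE): split x in two halves and shift
    the second half by a function of the first. The Jacobian is triangular
    with unit diagonal, so its determinant is exactly 1."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    h1 = x1
    h2 = x2 + m(x1)          # m may be an arbitrary, non-invertible function
    return np.concatenate([h1, h2], axis=-1)

def coupling_inverse(h, m):
    """Exact inverse: subtract the same shift."""
    d = h.shape[-1] // 2
    h1, h2 = h[..., :d], h[..., d:]
    return np.concatenate([h1, h2 - m(h1)], axis=-1)

# Any function works for m; a fixed nonlinear map stands in for a learned network.
m = lambda a: np.tanh(a) * 3.0

x = np.random.randn(5, 4)
h = coupling_forward(x, m)
x_rec = coupling_inverse(h, m)
print(np.allclose(x, x_rec))  # the layer is invertible by construction
```

Because the determinant is 1, the exact log-likelihood reduces to the log-density of h under the factorized prior, which is what makes maximum likelihood training tractable for this family.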
Another generalization of ICA is to learn groups of features, with statistical dependence allowed within a group but discouraged between groups (Hyvärinen and Hoyer, 1999; Hyvärinen et al., 2001b). When the groups of related units are chosen to be non-overlapping, this is called independent subspace analysis. It is also possible to assign spatial coordinates to each hidden unit and form overlapping groups of spatially neighboring units. This encourages nearby units to learn similar features. When applied to natural images, this topographic ICA approach learns Gabor filters, such that neighboring features have similar orientation, location or frequency. Many different phase offsets of similar Gabor functions occur within each region, so that pooling over small regions yields translation invariance.

13.3 Slow Feature Analysis

Slow feature analysis (SFA) is a linear factor model that uses information from time signals to learn invariant features (Wiskott and Sejnowski, 2002).
Slow feature analysis is motivated by a general principle called the slowness principle. The idea is that the important characteristics of scenes change very slowly compared to the individual measurements that make up a description of a scene. For example, in computer vision, individual pixel values can change very rapidly. If a zebra moves from left to right across the image, an individual pixel will rapidly change from black to white and back again as the zebra's stripes pass over the pixel. By comparison, the feature indicating whether a zebra is in the image will not change at all, and the feature describing the zebra's position will change slowly. We therefore may wish to regularize our model to learn features that change slowly over time.

The slowness principle predates slow feature analysis and has been applied to a wide variety of models (Hinton, 1989; Földiák, 1989; Mobahi et al., 2009; Bergstra and Bengio, 2009). In general, we can apply the slowness principle to any differentiable model trained with gradient descent. The slowness principle may be introduced by adding a term to the cost function of the form

\lambda \sum_t L\big(f(x^{(t+1)}), f(x^{(t)})\big) \qquad (13.7)
where \lambda is a hyperparameter determining the strength of the slowness regularization term, t is the index into a time sequence of examples, f is the feature extractor to be regularized, and L is a loss function measuring the distance between f(x^{(t)}) and f(x^{(t+1)}). A common choice for L is the mean squared difference.

Slow feature analysis is a particularly efficient application of the slowness principle. It is efficient because it is applied to a linear feature extractor, and can thus be trained in closed form. Like some variants of ICA, SFA is not quite a generative model per se, in the sense that it defines a linear map between input space and feature space but does not define a prior over feature space and thus does not impose a distribution p(x) on input space.
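The regularizer of equation 13.7 is straightforward to state in code. A minimal NumPy sketch, using the mean squared difference for L (the function name is illustrative):

```python
import numpy as np

def slowness_penalty(features, lam=0.1):
    """Slowness regularizer of equation 13.7:
    lam * sum_t L(f(x_{t+1}), f(x_t)), with L the mean squared difference.
    `features` holds f(x_t) for each time step t, shape (T, d)."""
    diffs = features[1:] - features[:-1]          # f(x_{t+1}) - f(x_t)
    return lam * np.sum(np.mean(diffs ** 2, axis=1))

# A slowly varying feature incurs a smaller penalty than a rapidly varying one.
t = np.linspace(0.0, 2 * np.pi, 100)
slow = np.sin(t)[:, None]        # changes gradually from step to step
fast = np.sin(20 * t)[:, None]   # changes rapidly from step to step
print(slowness_penalty(slow) < slowness_penalty(fast))  # True
```

In a gradient-based model this term would simply be added to the task loss, with lam playing the role of \lambda.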
The SFA algorithm (Wiskott and Sejnowski, 2002) consists of defining f(x; \theta) to be a linear transformation, and solving the optimization problem

\min_\theta \mathbb{E}_t \Big[ \big( f(x^{(t+1)})_i - f(x^{(t)})_i \big)^2 \Big] \qquad (13.8)

subject to the constraints

\mathbb{E}_t \big[ f(x^{(t)})_i \big] = 0 \qquad (13.9)

and

\mathbb{E}_t \big[ f(x^{(t)})_i^2 \big] = 1. \qquad (13.10)
The constraint that the learned feature have zero mean is necessary to make the problem have a unique solution; otherwise we could add a constant to all feature values and obtain a different solution with equal value of the slowness objective. The constraint that the features have unit variance is necessary to prevent the pathological solution where all features collapse to 0. Like PCA, the SFA features are ordered, with the first feature being the slowest. To learn multiple features, we must also add the constraint

\forall i < j, \quad \mathbb{E}_t \big[ f(x^{(t)})_i \, f(x^{(t)})_j \big] = 0. \qquad (13.11)

This specifies that the learned features must be linearly decorrelated from each other. Without this constraint, all of the learned features would simply capture the one slowest signal. One could imagine using other mechanisms, such as minimizing reconstruction error, to force the features to diversify, but this decorrelation mechanism admits a simple solution due to the linearity of SFA features. The SFA problem may be solved in closed form by a linear algebra package.
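One standard route to that closed-form solution (implementations vary, so this is a sketch rather than the canonical algorithm) is to whiten the data, which enforces constraints 13.9–13.11, and then diagonalize the covariance of the one-step temporal differences; the eigenvectors with the smallest eigenvalues give the slowest features of objective 13.8.

```python
import numpy as np

def sfa(x):
    """Closed-form linear SFA sketch. Center and whiten the data
    (constraints 13.9-13.11), then eigendecompose the covariance of the
    temporal differences: directions with the smallest eigenvalues vary
    most slowly (objective 13.8)."""
    x = x - x.mean(axis=0)                      # zero mean (13.9)
    cov = x.T @ x / len(x)
    vals, vecs = np.linalg.eigh(cov)
    white = vecs / np.sqrt(vals)                # whitening matrix (13.10, 13.11)
    z = x @ white
    dz = z[1:] - z[:-1]                         # one-step differences
    dcov = dz.T @ dz / len(dz)
    dvals, dvecs = np.linalg.eigh(dcov)         # ascending: slowest first
    return z @ dvecs, dvals

# A slow sine mixed with a fast one: SFA should recover the slow signal first.
t = np.linspace(0, 4 * np.pi, 2000)
sources = np.stack([np.sin(t), np.sin(17 * t)], axis=1)
mixed = sources @ np.array([[1.0, 0.4], [0.6, 1.0]])
features, slowness = sfa(mixed)
# The first (slowest) feature correlates strongly with the slow source.
print(abs(np.corrcoef(features[:, 0], sources[:, 0])[0, 1]) > 0.9)
```

The whitening step assumes the data covariance is full rank; rank-deficient inputs would need a dimensionality-reducing whitening first.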
SFA is typically used to learn nonlinear features by applying a nonlinear basis expansion to x before running SFA. For example, it is common to replace x by the quadratic basis expansion, a vector containing elements x_i x_j for all i and j. Linear SFA modules may then be composed to learn deep nonlinear slow feature extractors, by repeatedly learning a linear SFA feature extractor, applying a nonlinear basis expansion to its output, and then learning another linear SFA feature extractor on top of that expansion.

When trained on small spatial patches of videos of natural scenes, SFA with quadratic basis expansions learns features that share many characteristics with those of complex cells in V1 cortex (Berkes and Wiskott, 2005). When trained on videos of random motion within 3-D computer rendered environments, deep SFA learns features that share many characteristics with the features represented by neurons in rat brains that are used for navigation (Franzius et al., 2007). SFA thus seems to be a reasonably biologically plausible model.

A major advantage of SFA is that it is possible to theoretically predict which features SFA will learn, even in the deep, nonlinear setting.
To make such theoretical predictions, one must know about the dynamics of the environment in terms of configuration space (e.g., in the case of random motion in the 3-D rendered environment, the theoretical analysis proceeds from knowledge of the probability distribution over position and velocity of the camera). Given the knowledge of how the underlying factors actually change, it is possible to analytically solve for the optimal functions expressing these factors. In practice, experiments with deep SFA applied to simulated data seem to recover the theoretically predicted functions.
This is in comparison to other learning algorithms, where the cost function depends highly on specific pixel values, making it much more difficult to determine what features the model will learn. Deep SFA has also been used to learn features for object recognition and pose estimation (Franzius et al., 2008). So far, the slowness principle has not become the basis for any state of the art applications. It is unclear what factor has limited its performance. We speculate that perhaps the slowness prior is too strong, and that, rather than imposing a prior that features should be approximately constant, it would be better to impose a prior that features should be easy to predict from one time step to the next. The position of an object is a useful feature regardless of whether the object's velocity is high or low, but the slowness principle encourages the model to ignore the position of objects that have high velocity.

13.4 Sparse Coding

Sparse coding (Olshausen and Field, 1996) is a linear factor model that has been heavily studied as an unsupervised feature learning and feature extraction mechanism. Strictly speaking, the term "sparse coding" refers to the process of inferring the value of h in this model, while "sparse modeling" refers to the process of designing and learning the model, but the term "sparse coding" is often used to refer to both.
Like most other linear factor models, it uses a linear decoder plus noise to obtain reconstructions of x, as specified in equation 13.2. More specifically, sparse coding models typically assume that the linear factors have Gaussian noise with isotropic precision \beta:

p(x \mid h) = \mathcal{N}\Big(x; \, W h + b, \, \frac{1}{\beta} I\Big). \qquad (13.12)

The distribution p(h) is chosen to be one with sharp peaks near 0 (Olshausen and Field, 1996). Common choices include factorized Laplace, Cauchy or factorized Student-t distributions.
For example, the Laplace prior parametrized in terms of the sparsity penalty coefficient \lambda is given by

p(h_i) = \mathrm{Laplace}\Big(h_i; \, 0, \frac{2}{\lambda}\Big) = \frac{\lambda}{4} e^{-\frac{1}{2}\lambda |h_i|} \qquad (13.13)

and the Student-t prior by

p(h_i) \propto \frac{1}{\big(1 + \frac{h_i^2}{\nu}\big)^{\frac{\nu+1}{2}}}. \qquad (13.14)
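Both densities can be evaluated directly to see the sharp peak at 0 that encourages sparse codes. A small sketch (the Student-t density is left unnormalized, as in equation 13.14; function names are illustrative):

```python
import numpy as np

def laplace_prior(h, lam):
    """Laplace density of equation 13.13: (lam/4) * exp(-0.5 * lam * |h|)."""
    return (lam / 4.0) * np.exp(-0.5 * lam * np.abs(h))

def student_t_prior(h, nu):
    """Unnormalized Student-t density of equation 13.14."""
    return 1.0 / (1.0 + h ** 2 / nu) ** ((nu + 1) / 2.0)

# Both put far more mass near h = 0 than in the tails.
h = np.array([0.0, 1.0, 3.0])
print(laplace_prior(h, lam=2.0))   # monotonically decreasing in |h|
print(student_t_prior(h, nu=3.0))  # likewise, with heavier tails
```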
Training sparse coding with maximum likelihood is intractable. Instead, the training alternates between encoding the data and training the decoder to better reconstruct the data given the encoding. This approach will be justified further as a principled approximation to maximum likelihood later, in section 19.3.

For models such as PCA, we have seen the use of a parametric encoder function that predicts h and consists only of multiplication by a weight matrix. The encoder that we use with sparse coding is not a parametric encoder. Instead, the encoder is an optimization algorithm that solves an optimization problem in which we seek the single most likely code value:

h^* = f(x) = \arg\max_h p(h \mid x). \qquad (13.15)

When combined with equation 13.13 and equation 13.12, this yields the following optimization problem:

\arg\max_h p(h \mid x) \qquad (13.16)
= \arg\max_h \log p(h \mid x) \qquad (13.17)
= \arg\min_h \lambda \lVert h \rVert_1 + \beta \lVert x - W h \rVert_2^2, \qquad (13.18)

where we have dropped terms not depending on h and divided by positive scaling factors to simplify the equation.
Due to the imposition of an L^1 norm on h, this procedure will yield a sparse h^* (see section 7.1.2).

To train the model rather than just perform inference, we alternate between minimization with respect to h and minimization with respect to W. In this presentation, we treat \beta as a hyperparameter. Typically it is set to 1, because its role in this optimization problem is shared with \lambda and there is no need for both hyperparameters. In principle, we could also treat \beta as a parameter of the model and learn it. Our presentation here has discarded some terms that do not depend on h but do depend on \beta. To learn \beta, these terms must be included, or \beta will collapse to 0.
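Equation 13.18 is an L^1-regularized least squares problem, so the inference step can be carried out with a proximal gradient method. A minimal sketch using ISTA (iterative shrinkage-thresholding), which is one standard solver for this problem, not necessarily the procedure used in any particular sparse coding paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrink each coordinate toward 0."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, W, lam=0.5, beta=1.0, n_steps=200):
    """ISTA for equation 13.18: argmin_h lam*||h||_1 + beta*||x - W h||_2^2.
    Alternate a gradient step on the smooth reconstruction term with
    soft-thresholding for the L1 term."""
    h = np.zeros(W.shape[1])
    step = 1.0 / (2 * beta * np.linalg.norm(W, 2) ** 2)   # 1/Lipschitz constant
    for _ in range(n_steps):
        grad = -2 * beta * W.T @ (x - W @ h)
        h = soft_threshold(h - step * grad, step * lam)
    return h

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 50))          # overcomplete dictionary
h_true = np.zeros(50)
h_true[[3, 17, 41]] = [1.5, -2.0, 1.0]     # a sparse ground-truth code
x = W @ h_true
h = sparse_code(x, W)
print(np.count_nonzero(h) < 50)            # the recovered code is sparse
```

The soft-thresholding step produces exact zeros, which is why this optimization-based encoder yields sparse codes even though samples from a Laplace prior are almost never exactly zero.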
Not all approaches to sparse coding explicitly build a p(h) and a p(x \mid h). Often we are just interested in learning a dictionary of features with activation values that will often be zero when extracted using this inference procedure.

If we sample h from a Laplace prior, it is in fact a zero probability event for an element of h to actually be zero. The generative model itself is not especially sparse, only the feature extractor is. Goodfellow et al. (2013d) describe approximate inference in a different model family, the spike and slab sparse coding model, for which samples from the prior usually contain true zeros.
The sparse coding approach combined with the use of the non-parametric encoder can in principle minimize the combination of reconstruction error and log-prior better than any specific parametric encoder. Another advantage is that there is no generalization error to the encoder. A parametric encoder must learn how to map x to h in a way that generalizes. For unusual x that do not resemble the training data, a learned, parametric encoder may fail to find an h that results in accurate reconstruction or a sparse code. For the vast majority of formulations of sparse coding models, where the inference problem is convex, the optimization procedure will always find the optimal code (unless degenerate cases such as replicated weight vectors occur). Obviously, the sparsity and reconstruction costs can still rise on unfamiliar points, but this is due to generalization error in the decoder weights, rather than generalization error in the encoder.
The lack of generalization error in sparse coding's optimization-based encoding process may result in better generalization when sparse coding is used as a feature extractor for a classifier than when a parametric function is used to predict the code. Coates and Ng (2011) demonstrated that sparse coding features generalize better for object recognition tasks than the features of a related model based on a parametric encoder, the linear-sigmoid autoencoder. Inspired by their work, Goodfellow et al. (2013d) showed that a variant of sparse coding generalizes better than other feature extractors in the regime where extremely few labels are available (twenty or fewer labels per class).

The primary disadvantage of the non-parametric encoder is that it requires greater time to compute h given x, because the non-parametric approach requires running an iterative algorithm. The parametric autoencoder approach, developed in chapter 14, uses only a fixed number of layers, often only one.
Another disadvantage is that it is not straightforward to back-propagate through the non-parametric encoder, which makes it difficult to pretrain a sparse coding model with an unsupervised criterion and then fine-tune it using a supervised criterion. Modified versions of sparse coding that permit approximate derivatives do exist but are not widely used (Bagnell and Bradley, 2009).

Sparse coding, like other linear factor models, often produces poor samples, as shown in figure 13.2. This happens even when the model is able to reconstruct the data well and provide useful features for a classifier. The reason is that each individual feature may be learned well, but the factorial prior on the hidden code results in the model including random subsets of all of the features in each generated sample. This motivates the development of deeper models that can impose a non-factorial distribution on the deepest code layer, as well as the development of more sophisticated shallow models.
Figure 13.2: Example samples and weights from a spike and slab sparse coding model trained on the MNIST dataset. (Left) The samples from the model do not resemble the training examples. At first glance, one might assume the model is poorly fit. (Right) The weight vectors of the model have learned to represent penstrokes and sometimes complete digits. The model has thus learned useful features. The problem is that the factorial prior over features results in random subsets of features being combined. Few such subsets are appropriate to form a recognizable MNIST digit. This motivates the development of generative models that have more powerful distributions over their latent codes. Figure reproduced with permission from Goodfellow et al. (2013d).

13.5 Manifold Interpretation of PCA

Linear factor models, including PCA and factor analysis, can be interpreted as learning a manifold (Hinton et al., 1997). We can view probabilistic PCA as defining a thin pancake-shaped region of high probability: a Gaussian distribution that is very narrow along some axes, just as a pancake is very flat along its vertical axis, but is elongated along other axes, just as a pancake is wide along its horizontal axes.
This is illustrated in figure 13.3. PCA can be interpreted as aligning this pancake with a linear manifold in a higher-dimensional space. This interpretation applies not just to traditional PCA but also to any linear autoencoder that learns matrices W and V with the goal of making the reconstruction of x lie as close to x as possible.

Let the encoder be

h = f(x) = W^\top (x - \mu). \qquad (13.19)
The encoder computes a low-dimensional representation h. With the autoencoder view, we have a decoder computing the reconstruction

\hat{x} = g(h) = b + V h. \qquad (13.20)

Figure 13.3: Flat Gaussian capturing probability concentration near a low-dimensional manifold. The figure shows the upper half of the "pancake" above the "manifold plane" which goes through its middle. The variance in the direction orthogonal to the manifold is very small (arrow pointing out of plane) and can be considered like "noise," while the other variances are large (arrows in the plane) and correspond to "signal," and a coordinate system for the reduced-dimension data.

The choices of linear encoder and decoder that minimize reconstruction error

\mathbb{E}\big[ \lVert x - \hat{x} \rVert^2 \big] \qquad (13.21)

correspond to V = W, \mu = b = \mathbb{E}[x], and the columns of W forming an orthonormal basis which spans the same subspace as the principal eigenvectors of the covariance matrix

C = \mathbb{E}\big[ (x - \mu)(x - \mu)^\top \big]. \qquad (13.22)
In the case of PCA, the columns of W are these eigenvectors, ordered by the magnitude of the corresponding eigenvalues (which are all real and non-negative). One can also show that eigenvalue \lambda_i of C corresponds to the variance of x in the direction of eigenvector v^{(i)}. If x \in \mathbb{R}^D and h \in \mathbb{R}^d with d < D, then the optimal reconstruction error (choosing \mu, b, V and W as above) is

\min \mathbb{E}\big[ \lVert x - \hat{x} \rVert^2 \big] = \sum_{i=d+1}^{D} \lambda_i. \qquad (13.23)
Hence, if the covariance has rank d, the eigenvalues \lambda_{d+1} to \lambda_D are 0 and the reconstruction error is 0.

Furthermore, one can also show that the above solution can be obtained by maximizing the variances of the elements of h, under orthogonal W, instead of minimizing reconstruction error.

Linear factor models are some of the simplest generative models and some of the simplest models that learn a representation of data. Much as linear classifiers and linear regression models may be extended to deep feedforward networks, these linear factor models may be extended to autoencoder networks and deep probabilistic models that perform the same tasks but with a much more powerful and flexible model family.
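The identity in equation 13.23 is easy to check numerically: project onto the top-d eigenvectors of the sample covariance and compare the mean squared reconstruction error with the sum of the discarded eigenvalues. A NumPy sketch (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, n = 6, 2, 2000
# Random data in R^D with an anisotropic covariance.
x = rng.standard_normal((n, D)) @ rng.standard_normal((D, D))
mu = x.mean(axis=0)

# Eigendecomposition of the sample covariance C (equation 13.22).
C = (x - mu).T @ (x - mu) / n
vals, vecs = np.linalg.eigh(C)            # ascending eigenvalues
W = vecs[:, ::-1][:, :d]                  # top-d principal eigenvectors

# Encoder and decoder of equations 13.19-13.20 with V = W, b = mu.
h = (x - mu) @ W
x_hat = mu + h @ W.T

# Mean squared reconstruction error equals the sum of discarded eigenvalues.
err = np.mean(np.sum((x - x_hat) ** 2, axis=1))
discarded = vals[:-d].sum()               # the D - d smallest eigenvalues
print(np.allclose(err, discarded))        # True: equation 13.23 holds exactly
```

Because the eigenvectors come from the same sample covariance used to measure the error, the identity holds exactly here, up to floating-point precision.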
Chapter 14. Autoencoders

An autoencoder is a neural network that is trained to attempt to copy its input to its output. Internally, it has a hidden layer h that describes a code used to represent the input. The network may be viewed as consisting of two parts: an encoder function h = f(x) and a decoder that produces a reconstruction r = g(h). This architecture is presented in figure 14.1. If an autoencoder succeeds in simply learning to set g(f(x)) = x everywhere, then it is not especially useful. Instead, autoencoders are designed to be unable to learn to copy perfectly. Usually they are restricted in ways that allow them to copy only approximately, and to copy only input that resembles the training data. Because the model is forced to prioritize which aspects of the input should be copied, it often learns useful properties of the data.

Modern autoencoders have generalized the idea of an encoder and a decoder beyond deterministic functions to stochastic mappings p_encoder(h \mid x) and p_decoder(x \mid h).
The idea of autoencoders has been part of the historical landscape of neural networks for decades (LeCun, 1987; Bourlard and Kamp, 1988; Hinton and Zemel, 1994). Traditionally, autoencoders were used for dimensionality reduction or feature learning. Recently, theoretical connections between autoencoders and latent variable models have brought autoencoders to the forefront of generative modeling, as we will see in chapter 20. Autoencoders may be thought of as being a special case of feedforward networks, and may be trained with all of the same techniques, typically minibatch gradient descent following gradients computed by back-propagation. Unlike general feedforward networks, autoencoders may also be trained using recirculation (Hinton and McClelland, 1988), a learning algorithm based on comparing the activations of the network on the original input to the activations on the reconstructed input.
Recirculation is regarded as more biologically plausible than back-propagation, but is rarely used for machine learning applications.

Figure 14.1: The general structure of an autoencoder, mapping an input x to an output (called reconstruction) r through an internal representation or code h. The autoencoder has two components: the encoder f (mapping x to h) and the decoder g (mapping h to r).

14.1 Undercomplete Autoencoders

Copying the input to the output may sound useless, but we are typically not interested in the output of the decoder. Instead, we hope that training the autoencoder to perform the input copying task will result in h taking on useful properties.

One way to obtain useful features from the autoencoder is to constrain h to have smaller dimension than x. An autoencoder whose code dimension is less than the input dimension is called undercomplete. Learning an undercomplete representation forces the autoencoder to capture the most salient features of the training data.
The learning process is described simply as minimizing a loss function

L(x, g(f(x))) \qquad (14.1)

where L is a loss function penalizing g(f(x)) for being dissimilar from x, such as the mean squared error.

When the decoder is linear and L is the mean squared error, an undercomplete autoencoder learns to span the same subspace as PCA. In this case, an autoencoder trained to perform the copying task has learned the principal subspace of the training data as a side-effect.

Autoencoders with nonlinear encoder functions f and nonlinear decoder functions g can thus learn a more powerful nonlinear generalization of PCA.
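A minimal sketch of minimizing the loss of equation 14.1 for a linear undercomplete autoencoder, trained by gradient descent on toy data (illustrative code, not the book's; constant factors in the gradients are absorbed into the learning rate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data in R^4 that lies almost entirely in a 2-D subspace.
n, D, d = 500, 4, 2
x = rng.standard_normal((n, d)) @ rng.standard_normal((d, D))
x += 0.01 * rng.standard_normal((n, D))       # small off-subspace noise

# Linear encoder h = x W_e and decoder r = h W_d, trained on the
# squared error L(x, g(f(x))) of equation 14.1.
W_e = 0.1 * rng.standard_normal((D, d))
W_d = 0.1 * rng.standard_normal((d, D))
lr = 0.01
losses = []
for _ in range(500):
    h = x @ W_e                               # code f(x)
    r = h @ W_d                               # reconstruction g(f(x))
    err = r - x
    losses.append(np.mean(err ** 2))
    # Gradients of the squared error w.r.t. both weight matrices.
    g_d = h.T @ err / n
    g_e = x.T @ (err @ W_d.T) / n
    W_d -= lr * g_d
    W_e -= lr * g_e

print(losses[-1] < losses[0])   # training reduces the copying loss
```

With a linear decoder and mean squared error, the subspace this converges toward is the principal subspace, in line with the PCA connection described above.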
Unfortunately, if the encoder and decoder are allowed too much capacity, the autoencoder can learn to perform the copying task without extracting useful information about the distribution of the data. Theoretically, one could imagine that an autoencoder with a one-dimensional code but a very powerful nonlinear encoder could learn to represent each training example x^{(i)} with the code i. The decoder could learn to map these integer indices back to the values of specific training examples. This specific scenario does not occur in practice, but it illustrates clearly that an autoencoder trained to perform the copying task can fail to learn anything useful about the dataset if the capacity of the autoencoder is allowed to become too great.

14.2 Regularized Autoencoders

Undercomplete autoencoders, with code dimension less than the input dimension, can learn the most salient features of the data distribution. We have seen that these autoencoders fail to learn anything useful if the encoder and decoder are given too much capacity.
A similar problem occurs if the hidden code is allowed to have dimension equal to the input, and in the overcomplete case in which the hidden code has dimension greater than the input. In these cases, even a linear encoder and linear decoder can learn to copy the input to the output without learning anything useful about the data distribution.

Ideally, one could train any architecture of autoencoder successfully, choosing the code dimension and the capacity of the encoder and decoder based on the complexity of the distribution to be modeled. Regularized autoencoders provide the ability to do so. Rather than limiting the model capacity by keeping the encoder and decoder shallow and the code size small, regularized autoencoders use a loss function that encourages the model to have other properties besides the ability to copy its input to its output. These other properties include sparsity of the representation, smallness of the derivative of the representation, and robustness to noise or to missing inputs.
A regularized autoencoder can be nonlinear and overcomplete but still learn something useful about the data distribution even if the model capacity is great enough to learn a trivial identity function.

In addition to the methods described here, which are most naturally interpreted as regularized autoencoders, nearly any generative model with latent variables and equipped with an inference procedure (for computing latent representations given input) may be viewed as a particular form of autoencoder.
Two generative modeling approaches that emphasize this connection with autoencoders are the descendants of the Helmholtz machine (Hinton et al., 1995b), such as the variational autoencoder (section 20.10.3) and the generative stochastic networks (section 20.12). These models naturally learn high-capacity, overcomplete encodings of the input and do not require regularization for these encodings to be useful. Their encodings are naturally useful because the models were trained to approximately maximize the probability of the training data rather than to copy the input to the output.

14.2.1 Sparse Autoencoders

A sparse autoencoder is simply an autoencoder whose training criterion involves a sparsity penalty Ω(h) on the code layer h, in addition to the reconstruction error:

    L(x, g(f(x))) + Ω(h),    (14.2)

where g(h) is the decoder output and typically we have h = f(x), the encoder output.
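To make the criterion concrete, equation 14.2 can be sketched in a few lines of NumPy. Everything below — the weights, the sigmoid encoder, the squared-error choice for L and the value of λ — is an illustrative assumption, not a prescription from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative one-layer encoder f and decoder g (weights chosen arbitrarily).
W_enc = rng.normal(size=(4, 6)) * 0.1   # input dim 6, code dim 4
W_dec = rng.normal(size=(6, 4)) * 0.1

def f(x):
    # Encoder: affine map followed by a sigmoid nonlinearity.
    return 1.0 / (1.0 + np.exp(-W_enc @ x))

def g(h):
    # Decoder: a simple linear map back to input space.
    return W_dec @ h

def sparse_ae_loss(x, lam=0.1):
    # L(x, g(f(x))) + Omega(h), with squared error for L
    # and Omega(h) = lam * sum_i |h_i| as the sparsity penalty.
    h = f(x)
    reconstruction_error = np.sum((g(h) - x) ** 2)
    sparsity_penalty = lam * np.sum(np.abs(h))
    return reconstruction_error + sparsity_penalty

x = rng.normal(size=6)
print(sparse_ae_loss(x))
```

Setting lam to zero recovers the plain reconstruction criterion; any positive lam adds a cost that grows with the magnitude of the code.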
Sparse autoencoders are typically used to learn features for another task, such as classification. An autoencoder that has been regularized to be sparse must respond to unique statistical features of the dataset it has been trained on, rather than simply acting as an identity function. In this way, training to perform the copying task with a sparsity penalty can yield a model that has learned useful features as a byproduct.

We can think of the penalty Ω(h) simply as a regularizer term added to a feedforward network whose primary task is to copy the input to the output (unsupervised learning objective) and possibly also perform some supervised task (with a supervised learning objective) that depends on these sparse features. Unlike other regularizers such as weight decay, there is not a straightforward Bayesian interpretation to this regularizer. As described in section 5.6.1, training with weight decay and other regularization penalties can be interpreted as a MAP approximation to Bayesian inference, with the added regularizing penalty corresponding to a prior probability distribution over the model parameters.
In this view, regularized maximum likelihood corresponds to maximizing p(θ | x), which is equivalent to maximizing log p(x | θ) + log p(θ). The log p(x | θ) term is the usual data log-likelihood term, and the log p(θ) term, the log-prior over parameters, incorporates the preference over particular values of θ. This view was described in section 5.6. Regularized autoencoders defy such an interpretation because the regularizer depends on the data and is therefore by definition not a prior in the formal sense of the word. We can still think of these regularization terms as implicitly expressing a preference over functions.
Rather than thinking of the sparsity penalty as a regularizer for the copying task, we can think of the entire sparse autoencoder framework as approximating maximum likelihood training of a generative model that has latent variables. Suppose we have a model with visible variables x and latent variables h, with an explicit joint distribution p_model(x, h) = p_model(h) p_model(x | h). We refer to p_model(h) as the model's prior distribution over the latent variables, representing the model's beliefs prior to seeing x. This is different from the way we have previously used the word "prior," to refer to the distribution p(θ) encoding our beliefs about the model's parameters before we have seen the training data. The log-likelihood can be decomposed as

    log p_model(x) = log Σ_h p_model(h, x).    (14.3)

We can think of the autoencoder as approximating this sum with a point estimate for just one highly likely value for h. This is similar to the sparse coding generative model (section 13.4), but with h being the output of the parametric encoder rather than the result of an optimization that infers the most likely h.
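The point-estimate approximation can be illustrated numerically for a tiny discrete model (all probabilities below are invented for illustration): the sum in equation 14.3 is bounded below by the single term for the most likely h, and the bound is tight when the posterior concentrates on one value:

```python
import numpy as np

# A tiny joint distribution p_model(h, x) over 3 discrete latent values,
# for one fixed observed x; the numbers are purely illustrative.
p_joint = np.array([0.30, 0.02, 0.01])   # p_model(h, x) for h = 0, 1, 2

log_p_x = np.log(np.sum(p_joint))            # log p_model(x), equation 14.3
h_star = np.argmax(p_joint)                  # the most likely h
point_estimate = np.log(p_joint[h_star])     # log p_model(h*, x)

print(log_p_x, point_estimate)  # the point estimate lower-bounds log p(x)
```

Here most of the probability mass sits on h = 0, so the single-term approximation is close; spreading the mass evenly would make it loose.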
From this point of view, with this h chosen, we are maximizing

    log p_model(h, x) = log p_model(h) + log p_model(x | h).    (14.4)

The log p_model(h) term can be sparsity-inducing. For example, the Laplace prior,

    p_model(h_i) = (λ/2) e^(−λ|h_i|),    (14.5)

corresponds to an absolute value sparsity penalty. Expressing the log-prior as an absolute value penalty, we obtain

    Ω(h) = λ Σ_i |h_i|,    (14.6)

    −log p_model(h) = Σ_i (λ|h_i| − log(λ/2)) = Ω(h) + const,    (14.7)

where the constant term depends only on λ and not on h. We typically treat λ as a hyperparameter and discard the constant term since it does not affect the parameter learning. Other priors, such as the Student-t prior, can also induce sparsity.
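The correspondence in equations 14.5–14.7 is easy to verify numerically. The following sketch, with arbitrary illustrative choices of λ and h, checks that −log p_model(h) under a factorial Laplace prior equals λ Σ_i |h_i| plus a constant that does not depend on h:

```python
import numpy as np

lam = 0.5
h = np.array([1.5, -2.0, 0.3])

# Factorial Laplace prior: p(h_i) = (lam / 2) * exp(-lam * |h_i|).
log_p = np.sum(np.log(lam / 2.0) - lam * np.abs(h))

# Omega(h) = lam * sum_i |h_i|; constant = -len(h) * log(lam / 2).
omega = lam * np.sum(np.abs(h))
const = -len(h) * np.log(lam / 2.0)

# -log p(h) = Omega(h) + const, as in equation 14.7.
print(-log_p, omega + const)
```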
From this point of view of sparsity as resulting from the effect of p_model(h) on approximate maximum likelihood learning, the sparsity penalty is not a regularization term at all. It is just a consequence of the model's distribution over its latent variables. This view provides a different motivation for training an autoencoder: it is a way of approximately training a generative model.
It also provides a different reason for why the features learned by the autoencoder are useful: they describe the latent variables that explain the input.

Early work on sparse autoencoders (Ranzato et al., 2007a, 2008) explored various forms of sparsity and proposed a connection between the sparsity penalty and the log Z term that arises when applying maximum likelihood to an undirected probabilistic model p(x) = (1/Z) p̃(x). The idea is that minimizing log Z prevents a probabilistic model from having high probability everywhere, and imposing sparsity on an autoencoder prevents the autoencoder from having low reconstruction error everywhere. In this case, the connection is on the level of an intuitive understanding of a general mechanism rather than a mathematical correspondence. The interpretation of the sparsity penalty as corresponding to log p_model(h) in a directed model p_model(h) p_model(x | h) is more mathematically straightforward.
One way to achieve actual zeros in h for sparse (and denoising) autoencoders was introduced in Glorot et al. (2011b). The idea is to use rectified linear units to produce the code layer. With a prior that actually pushes the representations to zero (like the absolute value penalty), one can thus indirectly control the average number of zeros in the representation.

14.2.2 Denoising Autoencoders

Rather than adding a penalty Ω to the cost function, we can obtain an autoencoder that learns something useful by changing the reconstruction error term of the cost function. Traditionally, autoencoders minimize some function

    L(x, g(f(x))),    (14.8)

where L is a loss function penalizing g(f(x)) for being dissimilar from x, such as the L² norm of their difference. This encourages g ∘ f to learn to be merely an identity function if they have the capacity to do so.
A denoising autoencoder, or DAE, instead minimizes

    L(x, g(f(x̃))),    (14.9)

where x̃ is a copy of x that has been corrupted by some form of noise. Denoising autoencoders must therefore undo this corruption rather than simply copying their input. Denoising training forces f and g to implicitly learn the structure of p_data(x), as shown by Alain and Bengio (2013) and Bengio et al. (2013c).
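Equation 14.9 can be sketched as follows. The linear maps, the tanh encoder, the squared-error choice for L and the additive Gaussian corruption are all illustrative assumptions; the essential point is only that the loss compares the reconstruction of the corrupted x̃ against the clean x:

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder encoder/decoder weights, chosen arbitrarily for illustration.
W_enc = rng.normal(size=(3, 5)) * 0.1
W_dec = rng.normal(size=(5, 3)) * 0.1

def f(x_tilde):
    return np.tanh(W_enc @ x_tilde)          # encoder

def g(h):
    return W_dec @ h                         # decoder

def corrupt(x, noise_std=0.5):
    # One possible corruption process C(x_tilde | x): additive Gaussian noise.
    return x + noise_std * rng.normal(size=x.shape)

def dae_loss(x):
    # L(x, g(f(x_tilde))): reconstruct the CLEAN x from the corrupted x_tilde.
    x_tilde = corrupt(x)
    return np.sum((g(f(x_tilde)) - x) ** 2)

x = rng.normal(size=5)
print(dae_loss(x))
```

Note that the target in the loss is x, not x̃ — the network is penalized unless it undoes the corruption.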
Denoising autoencoders thus provide yet another example of how useful properties can emerge as a byproduct of minimizing reconstruction error. They are also an example of how overcomplete, high-capacity models may be used as autoencoders so long as care is taken to prevent them from learning the identity function. Denoising autoencoders are presented in more detail in section 14.5.

14.2.3 Regularizing by Penalizing Derivatives

Another strategy for regularizing an autoencoder is to use a penalty Ω, as in sparse autoencoders,

    L(x, g(f(x))) + Ω(h, x),    (14.10)

but with a different form of Ω:

    Ω(h, x) = λ Σ_i ‖∇_x h_i‖².    (14.11)

This forces the model to learn a function that does not change much when x changes slightly. Because this penalty is applied only at training examples, it forces the autoencoder to learn features that capture information about the training distribution. An autoencoder regularized in this way is called a contractive autoencoder, or CAE.
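The penalty in equation 14.11 — λ times the sum of squared gradients of the code units, i.e. the squared Frobenius norm of the encoder Jacobian — can be estimated by finite differences for a toy encoder. The weights, λ and the finite-difference step below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(2, 4)) * 0.5

def f(x):
    # Toy encoder: sigmoid of a linear map.
    return 1.0 / (1.0 + np.exp(-W @ x))

def contractive_penalty(x, lam=0.1, eps=1e-5):
    # Omega(h, x) = lam * sum_i ||grad_x h_i||^2, estimated by
    # central finite differences on each input coordinate.
    n = x.size
    jac = np.zeros((f(x).size, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        jac[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return lam * np.sum(jac ** 2)

x = rng.normal(size=4)
print(contractive_penalty(x))
```

For this sigmoid encoder the Jacobian is also available in closed form, diag(h(1−h)) W, which makes the numerical estimate easy to check.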
This approach has theoretical connections to denoising autoencoders, manifold learning and probabilistic modeling. The CAE is described in more detail in section 14.7.

14.3 Representational Power, Layer Size and Depth

Autoencoders are often trained with only a single layer encoder and a single layer decoder. However, this is not a requirement. In fact, using deep encoders and decoders offers many advantages.

Recall from section 6.4.1 that there are many advantages to depth in a feedforward network. Because autoencoders are feedforward networks, these advantages also apply to autoencoders. Moreover, the encoder is itself a feedforward network, as is the decoder, so each of these components of the autoencoder can individually benefit from depth.
One major advantage of non-trivial depth is that the universal approximator theorem guarantees that a feedforward neural network with at least one hidden layer can represent an approximation of any function (within a broad class) to an arbitrary degree of accuracy, provided that it has enough hidden units.
This means that an autoencoder with a single hidden layer is able to represent the identity function along the domain of the data arbitrarily well. However, the mapping from input to code is shallow. This means that we are not able to enforce arbitrary constraints, such as that the code should be sparse. A deep autoencoder, with at least one additional hidden layer inside the encoder itself, can approximate any mapping from input to code arbitrarily well, given enough hidden units.

Depth can exponentially reduce the computational cost of representing some functions. Depth can also exponentially decrease the amount of training data needed to learn some functions. See section 6.4.1 for a review of the advantages of depth in feedforward networks.

Experimentally, deep autoencoders yield much better compression than corresponding shallow or linear autoencoders (Hinton and Salakhutdinov, 2006).
A common strategy for training a deep autoencoder is to greedily pretrain the deep architecture by training a stack of shallow autoencoders, so we often encounter shallow autoencoders, even when the ultimate goal is to train a deep autoencoder.

14.4 Stochastic Encoders and Decoders

Autoencoders are just feedforward networks. The same loss functions and output unit types that can be used for traditional feedforward networks are also used for autoencoders.

As described in section 6.2.2.4, a general strategy for designing the output units and the loss function of a feedforward network is to define an output distribution p(y | x) and minimize the negative log-likelihood −log p(y | x). In that setting, y was a vector of targets, such as class labels.

In the case of an autoencoder, x is now the target as well as the input. However, we can still apply the same machinery as before. Given a hidden code h, we may think of the decoder as providing a conditional distribution p_decoder(x | h).
We may then train the autoencoder by minimizing −log p_decoder(x | h). The exact form of this loss function will change depending on the form of p_decoder. As with traditional feedforward networks, we usually use linear output units to parametrize the mean of a Gaussian distribution if x is real-valued. In that case, the negative log-likelihood yields a mean squared error criterion. Similarly, binary x values correspond to a Bernoulli distribution whose parameters are given by a sigmoid output unit, discrete x values correspond to a softmax distribution, and so on.
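This correspondence between output distributions and losses can be checked directly: with a unit-variance Gaussian decoder, −log p_decoder(x | h) is half the squared error plus a constant independent of the mean, and with a Bernoulli decoder it is the binary cross-entropy. A small numeric sketch with arbitrary illustrative values:

```python
import numpy as np

# Gaussian decoder with unit variance: -log N(x; mu, I)
# equals 0.5 * ||x - mu||^2 plus a constant independent of mu.
x = np.array([0.2, -1.0, 0.7])
mu = np.array([0.0, -0.8, 1.0])
d = x.size
gauss_nll = 0.5 * np.sum((x - mu) ** 2) + 0.5 * d * np.log(2 * np.pi)
half_sq_err = 0.5 * np.sum((x - mu) ** 2)
print(gauss_nll - half_sq_err)  # the constant 0.5 * d * log(2 * pi)

# Bernoulli decoder: -log p(x | h) with sigmoid-output parameters p
# is the binary cross-entropy.
xb = np.array([1.0, 0.0, 1.0])
p = np.array([0.9, 0.2, 0.6])
bernoulli_nll = -np.sum(xb * np.log(p) + (1 - xb) * np.log(1 - p))
print(bernoulli_nll)
```

Minimizing the Gaussian negative log-likelihood over mu is therefore the same optimization problem as minimizing the mean squared error.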
Typically, the output variables are treated as being conditionally independent given h, so that this probability distribution is inexpensive to evaluate, but some techniques, such as mixture density outputs, allow tractable modeling of outputs with correlations.

Figure 14.2: The structure of a stochastic autoencoder, in which both the encoder and the decoder are not simple functions but instead involve some noise injection, meaning that their output can be seen as sampled from a distribution: p_encoder(h | x) for the encoder and p_decoder(x | h) for the decoder.

To make a more radical departure from the feedforward networks we have seen previously, we can also generalize the notion of an encoding function f(x) to an encoding distribution p_encoder(h | x), as illustrated in figure 14.2.
Any latent variable model p_model(h, x) defines a stochastic encoder

    p_encoder(h | x) = p_model(h | x)    (14.12)

and a stochastic decoder

    p_decoder(x | h) = p_model(x | h).    (14.13)

In general, the encoder and decoder distributions are not necessarily conditional distributions compatible with a unique joint distribution p_model(x, h). Alain et al. (2015) showed that training the encoder and decoder as a denoising autoencoder will tend to make them compatible asymptotically (with enough capacity and examples).

14.5 Denoising Autoencoders

The denoising autoencoder (DAE) is an autoencoder that receives a corrupted data point as input and is trained to predict the original, uncorrupted data point as its output. The DAE training procedure is illustrated in figure 14.3.
Figure 14.3: The computational graph of the cost function for a denoising autoencoder, which is trained to reconstruct the clean data point x from its corrupted version x̃. This is accomplished by minimizing the loss L = −log p_decoder(x | h = f(x̃)), where x̃ is a corrupted version of the data example x, obtained through a given corruption process C(x̃ | x). Typically the distribution p_decoder is a factorial distribution whose mean parameters are emitted by a feedforward network g.

We introduce a corruption process C(x̃ | x), which represents a conditional distribution over corrupted samples x̃, given a data sample x. The autoencoder then learns a reconstruction distribution p_reconstruct(x | x̃) estimated from training pairs (x, x̃), as follows:

1. Sample a training example x from the training data.
2. Sample a corrupted version x̃ from C(x̃ | x).
3. Use (x, x̃) as a training example for estimating the autoencoder reconstruction distribution p_reconstruct(x | x̃) = p_decoder(x | h), with h the output of encoder f(x̃) and p_decoder typically defined by a decoder g(h).
Typically we can simply perform gradient-based approximate minimization (such as minibatch gradient descent) on the negative log-likelihood −log p_decoder(x | h). So long as the encoder is deterministic, the denoising autoencoder is a feedforward network and may be trained with exactly the same techniques as any other feedforward network.

We can therefore view the DAE as performing stochastic gradient descent on the following expectation:

    −E_{x ∼ p̂_data(x)} E_{x̃ ∼ C(x̃ | x)} log p_decoder(x | h = f(x̃)),    (14.14)

where p̂_data(x) is the training distribution.
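The three-step procedure above, combined with single-example gradient descent, can be sketched end-to-end for a linear encoder and decoder with a squared-error loss (i.e. a unit-variance Gaussian p_decoder, whose negative log-likelihood reduces to squared error). All sizes, rates and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy training data concentrated near a 1-D subspace of R^3.
direction = np.array([0.5, 1.0, -0.5])
data = np.outer(rng.normal(size=200), direction) + 0.05 * rng.normal(size=(200, 3))

W = rng.normal(size=(1, 3)) * 0.1   # encoder f(x) = W x
V = rng.normal(size=(3, 1)) * 0.1   # decoder g(h) = V h
lr, noise_std = 0.01, 0.1

def reconstruction_error():
    # Mean squared reconstruction error on the clean training data.
    return np.mean(np.sum((data @ W.T @ V.T - data) ** 2, axis=1))

before = reconstruction_error()
for step in range(2000):
    x = data[rng.integers(len(data))]              # 1. sample a training example x
    x_tilde = x + noise_std * rng.normal(size=3)   # 2. sample x_tilde from C(x_tilde | x)
    # 3. gradient step on L(x, g(f(x_tilde))) = ||V W x_tilde - x||^2.
    h = W @ x_tilde
    err = V @ h - x
    V -= lr * 2.0 * np.outer(err, h)
    W -= lr * 2.0 * np.outer(V.T @ err, x_tilde)
after = reconstruction_error()
print(before, after)
```

With this linear model the DAE behaves like a regularized projection onto the data subspace, so the reconstruction error of clean examples drops substantially over training.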
Figure 14.4: A denoising autoencoder is trained to map a corrupted data point x̃ back to the original data point x. We illustrate training examples x as red crosses lying near a low-dimensional manifold illustrated with the bold black line. We illustrate the corruption process C(x̃ | x) with a gray circle of equiprobable corruptions. A gray arrow demonstrates how one training example is transformed into one sample from this corruption process. When the denoising autoencoder is trained to minimize the average of squared errors ‖g(f(x̃)) − x‖², the reconstruction g(f(x̃)) estimates E_{x, x̃ ∼ p_data(x) C(x̃ | x)}[x | x̃]. The vector g(f(x̃)) − x̃ points approximately towards the nearest point on the manifold, since g(f(x̃)) estimates the center of mass of the clean points x which could have given rise to x̃. The autoencoder thus learns a vector field g(f(x)) − x, indicated by the green arrows. This vector field estimates the score ∇_x log p_data(x)