Chapter 11. Practical Methodology

Monitor histograms of activations and gradients: It is often useful to visualize statistics of neural network activations and gradients, collected over a large number of training iterations (maybe one epoch). The pre-activation value of hidden units can tell us if the units saturate, or how often they do. For example, for rectifiers, how often are they off? Are there units that are always off? For tanh units, the average of the absolute value of the pre-activations tells us how saturated the unit is. In a deep network where the propagated gradients quickly grow or quickly vanish, optimization may be hampered. Finally, it is useful to compare the magnitude of parameter gradients to the magnitude of the parameters themselves. As suggested by Bottou (2015), we would like the magnitude of parameter updates over a minibatch to represent something like 1% of the magnitude of the parameter, not 50% or 0.001% (which would make the parameters move too slowly). It may be that some groups of parameters are moving at a good pace while others are stalled. When the data is sparse (like in natural language), some parameters may be very rarely updated, and this should be kept in mind when monitoring their evolution.
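To make this monitoring concrete, here is a minimal sketch in Python (the parameter and update dictionaries, and the flagging thresholds, are hypothetical illustrations rather than fixed recommendations):

```python
import numpy as np

def update_to_param_ratios(params, updates, eps=1e-12):
    """Ratio of update magnitude to parameter magnitude, per parameter
    group. Bottou (2015) suggests aiming for ratios around 1e-2."""
    ratios = {}
    for name, w in params.items():
        step = updates[name]  # e.g., -learning_rate * gradient for plain SGD
        ratios[name] = np.linalg.norm(step) / (np.linalg.norm(w) + eps)
    return ratios

# Flag parameter groups that move too fast or appear stalled:
# for name, r in update_to_param_ratios(params, updates).items():
#     if r > 0.5 or r < 1e-5:
#         print(f"{name}: update/param ratio {r:.2e} is outside a healthy range")
```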
Finally, many deep learning algorithms provide some sort of guarantee about the results produced at each step. For example, in part III, we will see some approximate inference algorithms that work by using algebraic solutions to optimization problems. Typically these can be debugged by testing each of their guarantees. Some guarantees that some optimization algorithms offer include that the objective function will never increase after one step of the algorithm, that the gradient with respect to some subset of variables will be zero after each step of the algorithm, and that the gradient with respect to all variables will be zero at convergence. Usually, due to rounding error, these conditions will not hold exactly in a digital computer, so the debugging test should include some tolerance parameter.
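Such guarantee tests are easy to express directly. A minimal sketch, assuming a scalar objective f and a gradient function grad (both hypothetical):

```python
import numpy as np

def check_objective_never_increases(f, x_before, x_after, tol=1e-6):
    """Debugging test for an algorithm guaranteeing that the objective
    never increases after one step; tol absorbs rounding error."""
    increase = f(x_after) - f(x_before)
    assert increase <= tol, f"objective increased by {increase:.3e}"

def check_zero_gradient_at_convergence(grad, x, tol=1e-5):
    """At convergence, the gradient with respect to all variables
    should be zero up to the tolerance."""
    g_norm = np.linalg.norm(grad(x))
    assert g_norm <= tol, f"gradient norm {g_norm:.3e} exceeds tolerance"
```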
11.6 Example: Multi-Digit Number Recognition

To provide an end-to-end description of how to apply our design methodology in practice, we present a brief account of the Street View transcription system, from the point of view of designing the deep learning components. Obviously, many other components of the complete system, such as the Street View cars, the database infrastructure, and so on, were of paramount importance. From the point of view of the machine learning task, the process began with data collection. The cars collected the raw data, and human operators provided labels. The transcription task was preceded by a significant amount of dataset curation, including using other machine learning techniques to detect the house numbers prior to transcribing them.
The transcription project began with a choice of performance metrics and desired values for these metrics. An important general principle is to tailor the choice of metric to the business goals for the project. Because maps are only useful if they have high accuracy, it was important to set a high accuracy requirement for this project. Specifically, the goal was to obtain human-level, 98% accuracy. This level of accuracy may not always be feasible to obtain. In order to reach this level of accuracy, the Street View transcription system sacrifices coverage. Coverage thus became the main performance metric optimized during the project, with accuracy held at 98%. As the convolutional network improved, it became possible to reduce the confidence threshold below which the network refuses to transcribe the input, eventually exceeding the goal of 95% coverage.
After choosing quantitative goals, the next step in our recommended methodology is to rapidly establish a sensible baseline system. For vision tasks, this means a convolutional network with rectified linear units. The transcription project began with such a model. At the time, it was not common for a convolutional network to output a sequence of predictions. To begin with the simplest possible baseline, the first implementation of the output layer of the model consisted of n different softmax units to predict a sequence of n characters. These softmax units were trained exactly the same as if the task were classification, with each softmax unit trained independently. Our recommended methodology is to iteratively refine the baseline and test whether each change makes an improvement. The first change to the Street View transcription system was motivated by a theoretical understanding of the coverage metric and the structure of the data. Specifically, the network refuses to classify an input x whenever the probability of the output sequence p(y | x) < t for some threshold t. Initially, the definition of p(y | x) was ad hoc, based on simply multiplying all of the softmax outputs together. This motivated the development of a specialized output layer and cost function that actually computed a principled log-likelihood.
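The ad hoc rule is easy to state concretely. A minimal sketch of this baseline's rejection mechanism (names are hypothetical; as described next, the project later replaced the product rule with a principled sequence log-likelihood):

```python
import numpy as np

def transcribe_or_reject(softmax_outputs, t):
    """softmax_outputs: shape (n, vocab), one independent softmax
    distribution per character position. The ad hoc sequence
    probability multiplies the per-position confidences, and the
    input is rejected whenever it falls below the threshold t."""
    per_position = softmax_outputs.max(axis=1)  # confidence per character
    chars = softmax_outputs.argmax(axis=1)      # most likely character ids
    p_sequence = float(np.prod(per_position))
    if p_sequence < t:
        return None, p_sequence                 # refuse to transcribe
    return chars, p_sequence
```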
This approach allowed the example rejection mechanism to function much more effectively. At this point, coverage was still below 90%, yet there were no obvious theoretical problems with the approach. Our methodology therefore suggests instrumenting the train and test set performance to determine whether the problem is underfitting or overfitting. In this case, train and test set error were nearly identical. Indeed, the main reason this project proceeded so smoothly was the availability of a dataset with tens of millions of labeled examples. Because train and test set error were so similar, this suggested that the problem was either due to underfitting or due to a problem with the training data.
One of the debugging strategies we recommend is to visualize the model's worst errors. In this case, that meant visualizing the incorrect training set transcriptions that the model gave the highest confidence. These proved to mostly consist of examples where the input image had been cropped too tightly, with some of the digits of the address removed by the cropping operation. For example, a photo of an address "1849" might be cropped too tightly, with only the "849" remaining visible. This problem could have been resolved by spending weeks improving the accuracy of the address number detection system responsible for determining the cropping regions. Instead, the team made a much more practical decision: simply expand the width of the crop region to be systematically wider than the address number detection system predicted. This single change added ten percentage points to the transcription system's coverage.
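This kind of error inspection amounts to sorting the incorrect examples by the model's confidence. A minimal sketch (the array names are hypothetical):

```python
import numpy as np

def worst_errors(confidences, predictions, labels, k=20):
    """Return indices of the k incorrect examples the model is most
    confident about -- the first candidates for visual inspection."""
    confidences = np.asarray(confidences)
    wrong = np.flatnonzero(np.asarray(predictions) != np.asarray(labels))
    order = np.argsort(-confidences[wrong])  # most confident mistakes first
    return wrong[order][:k]
```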
Finally, the last few percentage points of performance came from adjusting hyperparameters. This mostly consisted of making the model larger while maintaining some restrictions on its computational cost. Because train and test error remained roughly equal, it was always clear that any performance deficits were due to underfitting, as well as to a few remaining problems with the dataset itself. Overall, the transcription project was a great success and allowed hundreds of millions of addresses to be transcribed both faster and at lower cost than would have been possible via human effort. We hope that the design principles described in this chapter will lead to many other similar successes.
Chapter 12. Applications

In this chapter, we describe how to use deep learning to solve applications in computer vision, speech recognition, natural language processing, and other application areas of commercial interest. We begin by discussing the large-scale neural network implementations required for most serious AI applications. Next, we review several specific application areas that deep learning has been used to solve. While one goal of deep learning is to design algorithms that are capable of solving a broad variety of tasks, so far some degree of specialization is needed. For example, vision tasks require processing a large number of input features (pixels) per example. Language tasks require modeling a large number of possible values (words in the vocabulary) per input feature.

12.1 Large-Scale Deep Learning

Deep learning is based on the philosophy of connectionism: while an individual biological neuron or an individual feature in a machine learning model is not intelligent, a large population of these neurons or features acting together can exhibit intelligent behavior. It is important to emphasize that the number of neurons must be large. One of the key factors responsible for the improvement in neural networks' accuracy and the improvement of the complexity of tasks they can solve between the 1980s and today is the dramatic increase in the size of the networks we use.
As we saw in section 1.2.3, network sizes have grown exponentially for the past three decades, yet artificial neural networks are only as large as the nervous systems of insects. Because the size of neural networks is of paramount importance, deep learning requires high-performance hardware and software infrastructure.
12.1.1 Fast CPU Implementations

Traditionally, neural networks were trained using the CPU of a single machine. Today, this approach is generally considered insufficient. We now mostly use GPU computing or the CPUs of many machines networked together. Before moving to these expensive setups, researchers worked hard to demonstrate that CPUs could not manage the high computational workload required by neural networks. A description of how to implement efficient numerical CPU code is beyond the scope of this book, but we emphasize here that careful implementation for specific CPU families can yield large improvements. For example, in 2011, the best CPUs available could run neural network workloads faster when using fixed-point arithmetic rather than floating-point arithmetic. By creating a carefully tuned fixed-point implementation, Vanhoucke et al. (2011) obtained a threefold speedup over a strong floating-point system. Each new model of CPU has different performance characteristics, so sometimes floating-point implementations can be faster too. The important principle is that careful specialization of numerical computation routines can yield a large payoff.
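As a rough illustration of the fixed-point idea (not Vanhoucke et al.'s actual implementation; the bit widths here are arbitrary choices), floating-point operations are replaced by integer multiply-accumulates followed by a single shift:

```python
import numpy as np

def to_fixed_point(x, frac_bits=8, total_bits=16):
    """Quantize floats to signed fixed point with frac_bits fractional bits."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32)

def fixed_dot(qa, qb, frac_bits=8):
    """Dot product in fixed point: integer multiply-accumulate,
    then one shift to restore the original scale."""
    return int((qa.astype(np.int64) * qb.astype(np.int64)).sum()) >> frac_bits
```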
Other strategies, besides choosing whether to use fixed or floating point, include optimizing data structures to avoid cache misses and using vector instructions. Many machine learning researchers neglect these implementation details, but when the performance of an implementation restricts the size of the model, the accuracy of the model suffers.

12.1.2 GPU Implementations

Most modern neural network implementations are based on graphics processing units. Graphics processing units (GPUs) are specialized hardware components that were originally developed for graphics applications. The consumer market for video gaming systems spurred development of graphics processing hardware. The performance characteristics needed for good video gaming systems turn out to be beneficial for neural networks as well. Video game rendering requires performing many operations in parallel quickly. Models of characters and environments are specified in terms of lists of 3-D coordinates of vertices. Graphics cards must perform matrix multiplication and division on many vertices in parallel to convert these 3-D coordinates into 2-D on-screen coordinates. The graphics card must then perform many computations at each pixel in parallel to determine the color of each pixel.
In both cases, the computations are fairly simple and do not involve much branching compared to the computational workload that a CPU usually encounters. For example, each vertex in the same rigid object will be multiplied by the same matrix; there is no need to evaluate an if statement per vertex to determine which matrix to multiply by. The computations are also entirely independent of each other, and thus may be parallelized easily. The computations also involve processing massive buffers of memory, containing bitmaps describing the texture (color pattern) of each object to be rendered. Together, this results in graphics cards having been designed to have a high degree of parallelism and high memory bandwidth, at the cost of having a lower clock speed and less branching capability relative to traditional CPUs. Neural network algorithms require the same performance characteristics as the real-time graphics algorithms described above. Neural networks usually involve large and numerous buffers of parameters, activation values, and gradient values, each of which must be completely updated during every step of training. These buffers are large enough to fall outside the cache of a traditional desktop computer, so the memory bandwidth of the system often becomes the rate-limiting factor. GPUs offer a compelling advantage over CPUs because of their high memory bandwidth.
Neural network training algorithms typically do not involve much branching or sophisticated control, so they are appropriate for GPU hardware. Since neural networks can be divided into multiple individual "neurons" that can be processed independently from the other neurons in the same layer, neural networks easily benefit from the parallelism of GPU computing. GPU hardware was originally so specialized that it could only be used for graphics tasks. Over time, GPU hardware became more flexible, allowing custom subroutines to be used to transform the coordinates of vertices or assign colors to pixels. In principle, there was no requirement that these pixel values actually be based on a rendering task. These GPUs could be used for scientific computing by writing the output of a computation to a buffer of pixel values. Steinkrau et al. (2005) implemented a two-layer fully connected neural network on a GPU and reported a threefold speedup over their CPU-based baseline. Shortly thereafter, Chellapilla et al. (2006) demonstrated that the same technique could be used to accelerate supervised convolutional networks.
The popularity of graphics cards for neural network training exploded after the advent of general-purpose GPUs. These GP-GPUs could execute arbitrary code, not just rendering subroutines. NVIDIA's CUDA programming language provided a way to write this arbitrary code in a C-like language. With their relatively convenient programming model, massive parallelism, and high memory bandwidth, GP-GPUs now offer an ideal platform for neural network programming.
This platform was rapidly adopted by deep learning researchers soon after it became available (Raina et al., 2009; Ciresan et al., 2010). Writing efficient code for GP-GPUs remains a difficult task best left to specialists. The techniques required to obtain good performance on GPU are very different from those used on CPU. For example, good CPU-based code is usually designed to read information from the cache as much as possible. On GPU, most writable memory locations are not cached, so it can actually be faster to compute the same value twice, rather than compute it once and read it back from memory. GPU code is also inherently multi-threaded, and the different threads must be coordinated with each other carefully. For example, memory operations are faster if they can be coalesced. Coalesced reads or writes occur when several threads can each read or write a value that they need simultaneously, as part of a single memory transaction. Different models of GPUs are able to coalesce different kinds of read or write patterns.
Typically, memory operations are easier to coalesce if, among n threads, thread i accesses byte i + j of memory, and j is a multiple of some power of 2. The exact specifications differ between models of GPU. Another common consideration for GPUs is making sure that each thread in a group executes the same instruction simultaneously. This means that branching can be difficult on GPU. Threads are divided into small groups called warps. Each thread in a warp executes the same instruction during each cycle, so if different threads within the same warp need to execute different code paths, these different code paths must be traversed sequentially rather than in parallel. Because of the difficulty of writing high-performance GPU code, researchers should structure their workflow to avoid needing to write new GPU code in order to test new models or algorithms. Typically, one can do this by building a software library of high-performance operations like convolution and matrix multiplication, then specifying models in terms of calls to this library of operations.
For example, the machine learning library Pylearn2 (Goodfellow et al., 2013c) specifies all of its machine learning algorithms in terms of calls to Theano (Bergstra et al., 2010; Bastien et al., 2012) and cuda-convnet (Krizhevsky, 2010), which provide these high-performance operations. This factored approach can also ease support for multiple kinds of hardware. For example, the same Theano program can run on either CPU or GPU, without needing to change any of the calls to Theano itself. Other libraries like TensorFlow (Abadi et al., 2015) and Torch (Collobert et al., 2011b) provide similar features.
12.1.3 Large-Scale Distributed Implementations

In many cases, the computational resources available on a single machine are insufficient. We therefore want to distribute the workload of training and inference across many machines. Distributing inference is simple, because each input example we want to process can be run by a separate machine. This is known as data parallelism. It is also possible to get model parallelism, where multiple machines work together on a single datapoint, with each machine running a different part of the model. This is feasible for both inference and training. Data parallelism during training is somewhat harder. We can increase the size of the minibatch used for a single SGD step, but usually we get less than linear returns in terms of optimization performance. It would be better to allow multiple machines to compute multiple gradient descent steps in parallel. Unfortunately, the standard definition of gradient descent is as a completely sequential algorithm: the gradient at step t is a function of the parameters produced at step t − 1. This can be solved using asynchronous stochastic gradient descent (Bengio et al., 2001; Recht et al., 2011).
In this approach, several processor cores share the memory representing the parameters. Each core reads parameters without a lock, then computes a gradient, then increments the parameters without a lock. This reduces the average amount of improvement that each gradient descent step yields, because some of the cores overwrite each other's progress, but the increased rate of production of steps causes the learning process to be faster overall. Dean et al. (2012) pioneered the multi-machine implementation of this lock-free approach to gradient descent, where the parameters are managed by a parameter server rather than stored in shared memory. Distributed asynchronous gradient descent remains the primary strategy for training large deep networks and is used by most major deep learning groups in industry (Chilimbi et al., 2014; Wu et al., 2015). Academic deep learning researchers typically cannot afford the same scale of distributed learning systems, but some research has focused on how to build distributed networks with relatively low-cost hardware available in the university setting (Coates et al., 2013).
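A single-machine, lock-free sketch of this idea follows (sample_minibatch and compute_gradient are hypothetical; a real system would use processes or a parameter server rather than Python threads):

```python
import threading
import numpy as np

w = np.zeros(100)  # shared parameters, visible to every worker thread

def worker(sample_minibatch, compute_gradient, lr=0.01, steps=1000):
    for _ in range(steps):
        X, y = sample_minibatch()
        g = compute_gradient(w, X, y)  # read the parameters without a lock
        w[:] -= lr * g                 # increment them without a lock

# threads = [threading.Thread(target=worker, args=(sample_fn, grad_fn))
#            for _ in range(4)]
# for t in threads: t.start()
# for t in threads: t.join()
```

Python threads only approximate the shared-memory cores described above (numpy releases the GIL for large array operations); the point of the sketch is the deliberate absence of locks, not the specific concurrency primitive.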
12.1.4 Model Compression

In many commercial applications, it is much more important that the time and memory cost of running inference in a machine learning model be low than that the time and memory cost of training be low.
For applications that do not require personalization, it is possible to train a model once, then deploy it to be used by billions of users. In many cases, the end user is more resource-constrained than the developer. For example, one might train a speech recognition network with a powerful computer cluster, then deploy it on mobile phones. A key strategy for reducing the cost of inference is model compression (Buciluǎ et al., 2006). The basic idea of model compression is to replace the original, expensive model with a smaller model that requires less memory and runtime to store and evaluate. Model compression is applicable when the size of the original model is driven primarily by a need to prevent overfitting. In most cases, the model with the lowest generalization error is an ensemble of several independently trained models. Evaluating all n ensemble members is expensive. Sometimes, even a single model generalizes better if it is large (for example, if it is regularized with dropout). These large models learn some function f(x), but do so using many more parameters than are necessary for the task. Their size is necessary only due to the limited number of training examples.
As soon as we have fit this function f(x), we can generate a training set containing infinitely many examples, simply by applying f to randomly sampled points x. We then train the new, smaller model to match f(x) on these points. In order to most effectively use the capacity of the new, small model, it is best to sample the new x points from a distribution resembling the actual test inputs that will be supplied to the model later. This can be done by corrupting training examples or by drawing points from a generative model trained on the original training set. Alternatively, one can train the smaller model only on the original training points, but train it to copy other features of the model, such as its posterior distribution over the incorrect classes (Hinton et al., 2014, 2015).
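A minimal sketch of the basic recipe (all names here are hypothetical; `fit` stands in for whatever training procedure the small model uses):

```python
def compress(big_model, small_model, sample_inputs, fit, n_points=1_000_000):
    """Label a large synthetic set with the fitted function f(x)
    computed by the big model, then train the small model to match it.
    sample_inputs should resemble the test distribution, for example
    by corrupting training examples or sampling a generative model."""
    X = sample_inputs(n_points)
    y = big_model.predict(X)  # pseudo-labels from f(x)
    fit(small_model, X, y)
    return small_model
```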
12.1.5 Dynamic Structure

One strategy for accelerating data processing systems in general is to build systems that have dynamic structure in the graph describing the computation needed to process an input. Data processing systems can dynamically determine which subset of many neural networks should be run on a given input. Individual neural networks can also exhibit dynamic structure internally by determining which subset of features (hidden units) to compute given information from the input. This form of dynamic structure inside neural networks is sometimes called conditional computation (Bengio, 2013; Bengio et al., 2013b).
Since many components of the architecture may be relevant only for a small number of possible inputs, the system can run faster by computing these features only when they are needed. Dynamic structure of computations is a basic computer science principle applied generally throughout the software engineering discipline. The simplest versions of dynamic structure applied to neural networks are based on determining which subset of some group of neural networks (or other machine learning models) should be applied to a particular input. A venerable strategy for accelerating inference in a classifier is to use a cascade of classifiers. The cascade strategy may be applied when the goal is to detect the presence of a rare object (or event). To know for sure that the object is present, we must use a sophisticated classifier with high capacity that is expensive to run. However, because the object is rare, we can usually use much less computation to reject inputs as not containing the object. In these situations, we can train a sequence of classifiers. The first classifiers in the sequence have low capacity and are trained to have high recall. In other words, they are trained to make sure we do not wrongly reject an input when the object is present. The final classifier is trained to have high precision.
At test time, we run inference by running the classifiers in a sequence, abandoning any example as soon as any one element in the cascade rejects it. Overall, this allows us to verify the presence of objects with high confidence, using a high-capacity model, but does not force us to pay the cost of full inference for every example. There are two different ways that the cascade can achieve high capacity. One way is to make the later members of the cascade individually have high capacity. In this case, the system as a whole obviously has high capacity, because some of its individual members do. It is also possible to make a cascade in which every individual model has low capacity but the system as a whole has high capacity due to the combination of many small models.
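The inference loop itself is simple. A minimal sketch (the `score` method and per-stage thresholds are hypothetical):

```python
def cascade_predict(classifiers, thresholds, x):
    """Run the cascade in sequence, cheap high-recall members first,
    abandoning the example as soon as any member rejects it."""
    for clf, t in zip(classifiers, thresholds):
        if clf.score(x) < t:  # score: estimated probability the object is present
            return False      # rejected early, at low cost
    return True               # survived every stage: object detected
```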
Viola and Jones (2001) used a cascade of boosted decision trees to implement a fast and robust face detector suitable for use in handheld digital cameras. Their classifier localizes a face using essentially a sliding window approach in which many windows are examined and rejected if they do not contain faces. Another version of cascades uses the earlier models to implement a sort of hard attention mechanism: the early members of the cascade localize an object, and later members of the cascade perform further processing given the location of the object. For example, Google transcribes address numbers from Street View imagery using a two-step cascade that first locates the address number with one machine learning model and then transcribes it with another (Goodfellow et al., 2014d). Decision trees themselves are an example of dynamic structure, because each node in the tree determines which of its subtrees should be evaluated for each input.
A simple way to accomplish the union of deep learning and dynamic structure is to train a decision tree in which each node uses a neural network to make the splitting decision (Guo and Gelfand, 1992), though this has typically not been done with the primary goal of accelerating inference computations. In the same spirit, one can use a neural network, called the gater, to select which one out of several expert networks will be used to compute the output, given the current input. The first version of this idea is called the mixture of experts (Nowlan, 1990; Jacobs et al., 1991), in which the gater outputs a set of probabilities or weights (obtained via a softmax nonlinearity), one per expert, and the final output is obtained by the weighted combination of the outputs of the experts. In that case, the use of the gater does not offer a reduction in computational cost, but if a single expert is chosen by the gater for each example, we obtain the hard mixture of experts (Collobert et al., 2001, 2002), which can considerably accelerate training and inference time. This strategy works well when the number of gating decisions is small, because it is not combinatorial.
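A minimal sketch of both variants (the gater and expert callables are hypothetical stand-ins for trained networks):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_output(gater, experts, x, hard=False):
    """Soft mixture: weighted combination of all expert outputs.
    Hard mixture: run only the single expert the gater selects."""
    weights = softmax(gater(x))  # one weight per expert
    if hard:
        return experts[int(np.argmax(weights))](x)  # only one expert runs
    return sum(w * expert(x) for w, expert in zip(weights, experts))
```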
But when we want to select different subsets of units or parameters, it is not possible to use a "soft switch," because doing so requires enumerating (and computing outputs for) all the gater configurations. To deal with this problem, several approaches have been explored to train combinatorial gaters. Bengio et al. (2013b) experiment with several estimators of the gradient on the gating probabilities, while Bacon et al. (2015) and Bengio et al. (2015a) use reinforcement learning techniques (policy gradient) to learn a form of conditional dropout on blocks of hidden units and obtain an actual reduction in computational cost without negatively affecting the quality of the approximation. Another kind of dynamic structure is a switch, where a hidden unit can receive input from different units depending on the context. This dynamic routing approach can be interpreted as an attention mechanism (Olshausen et al., 1993).
So far, the use of a hard switch has not proven effective on large-scale applications. Contemporary approaches instead use a weighted average over many possible inputs, and thus do not achieve all of the possible computational benefits of dynamic structure. Contemporary attention mechanisms are described in section 12.4.5.1. One major obstacle to using dynamically structured systems is the decreased degree of parallelism that results from the system following different code branches for different inputs. This means that few operations in the network can be described as matrix multiplication or batch convolution on a minibatch of examples. We can write more specialized subroutines that convolve each example with different kernels or multiply each row of a design matrix by a different set of columns of weights. Unfortunately, these more specialized subroutines are difficult to implement efficiently.
CPU implementations will be slow due to the lack of cache coherence, and GPU implementations will be slow due to the lack of coalesced memory transactions and the need to serialize warps when members of a warp take different branches. In some cases, these issues can be mitigated by partitioning the examples into groups that all take the same branch and processing these groups of examples simultaneously. This can be an acceptable strategy for minimizing the time required to process a fixed amount of examples in an offline setting. In a real-time setting where examples must be processed continuously, partitioning the workload can result in load-balancing issues. For example, if we assign one machine to process the first step in a cascade and another machine to process the last step in a cascade, then the first will tend to be overloaded and the last will tend to be underloaded. Similar issues arise if each machine is assigned to implement different nodes of a neural decision tree.
12.1.6 Specialized Hardware Implementations of Deep Networks

Since the early days of neural networks research, hardware designers have worked on specialized hardware implementations that could speed up training and/or inference of neural network algorithms. See early and more recent reviews of specialized hardware for deep networks (Lindsey and Lindblad, 1994; Beiu et al., 2003; Misra and Saha, 2010). Different forms of specialized hardware (Graf and Jackel, 1989; Mead and Ismail, 2012; Kim et al., 2009; Pham et al., 2012; Chen et al., 2014a,b) have been developed over the last decades, with ASICs (application-specific integrated circuits) that are either digital (based on binary representations of numbers), analog (Graf and Jackel, 1989; Mead and Ismail, 2012) (based on physical implementations of continuous values as voltages or currents), or hybrid (combining digital and analog components). In recent years, more flexible FPGA (field programmable gated array) implementations (where the particulars of the circuit can be written on the chip after it has been built) have been developed.
Though software implementations on general-purpose processing units (CPUs and GPUs) typically use 32 or 64 bits of precision to represent floating-point numbers, it has long been known that it is possible to use less precision, at least at inference time (Holt and Baker, 1991; Holi and Hwang, 1993; Presley and Haggard, 1994; Simard and Graf, 1994; Wawrzynek et al., 1996; Savich et al., 2007). This has become a more pressing issue in recent years as deep learning has gained in popularity in industrial products, and as the great impact of faster hardware was demonstrated with GPUs.
Another factor that motivates current research on specialized hardware for deep networks is that the rate of progress of a single CPU or GPU core has slowed down, and most recent improvements in computing speed have come from parallelization across cores (either in CPUs or GPUs). This is very different from the situation of the 1990s (the previous neural network era), when the hardware implementations of neural networks (which might take two years from inception to availability of a chip) could not keep up with the rapid progress and low prices of general-purpose CPUs. Building specialized hardware is thus a way to push the envelope further, at a time when new hardware designs are being developed for low-power devices such as phones, aiming for general-public applications of deep learning (e.g., with speech, computer vision, or natural language). Recent work on low-precision implementations of backprop-based neural nets (Vanhoucke et al., 2011; Courbariaux et al., 2015; Gupta et al., 2015) suggests that between 8 and 16 bits of precision can suffice for using or training deep neural networks with back-propagation. What is clear is that more precision is required during training than at inference time, and that some forms of dynamic fixed-point representation of numbers can be used to reduce how many bits are required per number.
Traditional fixed-point numbers are restricted to a fixed range (which corresponds to a given exponent in a floating-point representation). Dynamic fixed-point representations share that range among a set of numbers (such as all the weights in one layer). Using fixed-point rather than floating-point representations and using fewer bits per number reduces the hardware surface area, power requirements, and computing time needed for performing multiplications, and multiplications are the most demanding of the operations needed to use or train a modern deep network with backprop.
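A minimal sketch of the shared-range idea (the specific rounding and range rules here are illustrative assumptions, not a published format):

```python
import numpy as np

def dynamic_fixed_point(weights, bits=8):
    """Quantize one group of numbers (e.g., one layer's weights) so that
    they share a single scale chosen from the group's overall range."""
    max_abs = float(np.abs(weights).max())
    int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))))
    frac_bits = bits - 1 - int_bits        # one sign bit, then integer/fraction
    scale = 2.0 ** frac_bits
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    q = np.clip(np.round(weights * scale), lo, hi).astype(np.int32)
    return q, frac_bits  # integer codes plus one shared exponent for the group
```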
12.2 Computer Vision

Computer vision has traditionally been one of the most active research areas for deep learning applications, because vision is a task that is effortless for humans and many animals but challenging for computers (Ballard et al., 1983). Many of the most popular standard benchmark tasks for deep learning algorithms are forms of object recognition or optical character recognition. Computer vision is a very broad field encompassing a wide variety of ways of processing images and an amazing diversity of applications. Applications of computer vision range from reproducing human visual abilities, such as recognizing faces, to creating entirely new categories of visual abilities. As an example of the latter category, one recent computer vision application is to recognize sound waves from the vibrations they induce in objects visible in a video (Davis et al., 2014).
Most deep learning research on computer vision has not focused on such exotic applications that expand the realm of what is possible with imagery, but rather on a small core of AI goals aimed at replicating human abilities. Most deep learning for computer vision is used for object recognition or detection of some form, whether this means reporting which object is present in an image, annotating an image with bounding boxes around each object, transcribing a sequence of symbols from an image, or labeling each pixel in an image with the identity of the object it belongs to. Because generative modeling has been a guiding principle of deep learning research, there is also a large body of work on image synthesis using deep models. While image synthesis ex nihilo is usually not considered a computer vision endeavor, models capable of image synthesis are usually useful for image restoration, a computer vision task involving repairing defects in images or removing objects from images.

12.2.1 Preprocessing

Many application areas require sophisticated preprocessing because the original input comes in a form that is difficult for many deep learning architectures to represent. Computer vision usually requires relatively little of this kind of preprocessing.
The images should be standardized so that their pixels all lie in the same reasonable range, like [0, 1] or [−1, 1]. Mixing images that lie in [0, 1] with images that lie in [0, 255] will usually result in failure. Formatting images to have the same scale is the only kind of preprocessing that is strictly necessary. Many computer vision architectures require images of a standard size, so images must be cropped or scaled to fit that size. Even this rescaling is not always strictly necessary. Some convolutional models accept variably sized inputs and dynamically adjust the size of their pooling regions to keep the output size constant (Waibel et al., 1989). Other convolutional models have variable-sized output that automatically scales in size with the input, such as models that denoise or label each pixel in an image (Hadsell et al., 2007).
Dataset augmentation may be seen as a way of preprocessing the training set only. Dataset augmentation is an excellent way to reduce the generalization error of most computer vision models. A related idea applicable at test time is to show the model many different versions of the same input (for example, the same image cropped at slightly different locations) and have the different instantiations of the model vote to determine the output. This latter idea can be interpreted as an ensemble approach and helps to reduce generalization error. Other kinds of preprocessing are applied to both the train and the test set with the goal of putting each example into a more canonical form, in order to reduce the amount of variation that the model needs to account for.
Reducing the amount of variation in the data can both reduce generalization error and reduce the size of the model needed to fit the training set. Simpler tasks may be solved by smaller models, and simpler solutions are more likely to generalize well. Preprocessing of this kind is usually designed to remove some kind of variability in the input data that is easy for a human designer to describe and that the human designer is confident has no relevance to the task. When training with large datasets and large models, this kind of preprocessing is often unnecessary, and it is best to just let the model learn which kinds of variability it should become invariant to. For example, the AlexNet system for classifying ImageNet has only one preprocessing step: subtracting the mean across training examples of each pixel (Krizhevsky et al., 2012).

12.2.1.1 Contrast Normalization

One of the most obvious sources of variation that can be safely removed for many tasks is the amount of contrast in the image. Contrast simply refers to the magnitude of the difference between the bright and the dark pixels in an image. There are many ways of quantifying the contrast of an image.
In the context of deep learning, contrast usually refers to the standard deviation of the pixels in an image or region of an image. Suppose we have an image represented by a tensor $X \in \mathbb{R}^{r \times c \times 3}$, with $X_{i,j,1}$ being the red intensity at row $i$ and column $j$, $X_{i,j,2}$ giving the green intensity, and $X_{i,j,3}$ giving the blue intensity. Then the contrast of the entire image is given by

$$\sqrt{\frac{1}{3rc} \sum_{i=1}^{r} \sum_{j=1}^{c} \sum_{k=1}^{3} \left( X_{i,j,k} - \bar{X} \right)^2} \qquad (12.1)$$

where $\bar{X}$ is the mean intensity of the entire image:

$$\bar{X} = \frac{1}{3rc} \sum_{i=1}^{r} \sum_{j=1}^{c} \sum_{k=1}^{3} X_{i,j,k}. \qquad (12.2)$$

Global contrast normalization (GCN) aims to prevent images from having varying amounts of contrast by subtracting the mean from each image, then rescaling it so that the standard deviation across its pixels is equal to some constant $s$.
This approach is complicated by the fact that no scaling factor can change the contrast of a zero-contrast image (one whose pixels all have equal intensity). Images with very low but non-zero contrast often have little information content.
Dividing by the true standard deviation usually accomplishes nothing more than amplifying sensor noise or compression artifacts in such cases. This motivates introducing a small, positive regularization parameter $\lambda$ to bias the estimate of the standard deviation. Alternately, one can constrain the denominator to be at least $\epsilon$. Given an input image $X$, GCN produces an output image $X'$, defined such that

$$X'_{i,j,k} = s \, \frac{X_{i,j,k} - \bar{X}}{\max\left\{ \epsilon, \; \sqrt{\lambda + \frac{1}{3rc} \sum_{i=1}^{r} \sum_{j=1}^{c} \sum_{k=1}^{3} \left( X_{i,j,k} - \bar{X} \right)^2} \right\}}. \qquad (12.3)$$

Datasets consisting of large images cropped to interesting objects are unlikely to contain any images with nearly constant intensity. In these cases, it is safe to practically ignore the small-denominator problem by setting $\lambda = 0$ and to avoid division by zero in extremely rare cases by setting $\epsilon$ to an extremely low value like $10^{-8}$. This is the approach used by Goodfellow et al. (2013a) on the CIFAR-10 dataset. Small images cropped randomly are more likely to have nearly constant intensity, making aggressive regularization more useful. Coates et al. (2011) used $\epsilon = 0$ and $\lambda = 10$ on small, randomly selected patches drawn from CIFAR-10.
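For concreteness, a direct numpy implementation of equation 12.3 for a single image might look like the following minimal sketch:

```python
import numpy as np

def global_contrast_normalization(X, s=1.0, lmbda=0.0, eps=1e-8):
    """GCN per equation 12.3 for one image X of shape (r, c, 3)."""
    X = X.astype(np.float64)
    X_bar = X.mean()  # mean intensity over all 3*r*c values
    contrast = np.sqrt(lmbda + ((X - X_bar) ** 2).mean())
    return s * (X - X_bar) / max(eps, contrast)
```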
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf
| 470
|
Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org)
| 0
|
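A minimal NumPy sketch of equations 12.1 to 12.3 (an illustration, not the exact implementation used in the works cited above; the default values of s, λ and ε follow the choices discussed in the text):

    import numpy as np

    def global_contrast_normalization(X, s=1.0, lam=0.0, eps=1e-8):
        # X: image tensor of shape (r, c, 3)
        X = X.astype(np.float64)
        X_bar = X.mean()                       # equation 12.2: mean intensity
        centered = X - X_bar                   # subtract the mean from the image
        # Regularized standard deviation across all pixels (equations 12.1, 12.3)
        contrast = np.sqrt(lam + (centered ** 2).mean())
        return s * centered / max(contrast, eps)

    image = np.random.rand(32, 32, 3)          # stand-in for a real image
    out = global_contrast_normalization(image)
    print(out.std())                           # approximately s = 1 when lam = 0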
The scale parameter s can usually be set to 1, as done by Coates et al. (2011), or chosen to make each individual pixel have standard deviation across examples close to 1, as done by Goodfellow et al. (2013a). The standard deviation in equation 12.3 is just a rescaling of the L2 norm of the image (assuming the mean of the image has already been removed). It is preferable to define GCN in terms of standard deviation rather than L2 norm because the standard deviation includes division by the number of pixels, so GCN based on standard deviation allows the same s to be used regardless of image size. However, the observation that the L2 norm is proportional to the standard deviation can help build a useful intuition. One can understand GCN as mapping examples to a spherical shell. See figure 12.1 for an illustration. This can be a useful property because neural networks are often better at responding
to directions in space rather than exact locations. Responding to multiple distances in the same direction requires hidden units with collinear weight vectors but different biases. Such coordination can be difficult for the learning algorithm to discover. Additionally, many shallow graphical models have problems with representing multiple separated modes along the same line. GCN avoids these problems by reducing each example to a direction rather than a direction and a distance. Counterintuitively, there is a preprocessing operation known as sphering, and it is not the same operation as GCN. Sphering does not refer to making the data lie on a spherical shell, but rather to rescaling the principal components to have
equal variance, so that the multivariate normal distribution used by PCA has spherical contours. Sphering is more commonly known as whitening.

Figure 12.1: GCN maps examples onto a sphere. (Left) Raw input data may have any norm. (Center) GCN with λ = 0 maps all non-zero examples perfectly onto a sphere. Here we use s = 1 and ε = 10^{-8}. Because we use GCN based on normalizing the standard deviation rather than the L2 norm, the resulting sphere is not the unit sphere. (Right) Regularized GCN, with λ > 0, draws examples toward the sphere but does not completely discard the variation in their norm. We leave s and ε the same as before.

Global contrast normalization will often fail to highlight image features we would like to stand out, such as edges and corners. If we have a
scene with a large dark area and a large bright area (such as a city square with half the image in the shadow of a building), then global contrast normalization will ensure there is a large difference between the brightness of the dark area and the brightness of the light area. It will not, however, ensure that edges within the dark region stand out. This motivates local contrast normalization. Local contrast normalization ensures that the contrast is normalized across each small window, rather than over the image as a whole. See figure 12.2 for a comparison of global and local contrast normalization. Various definitions of local contrast normalization are possible. In all cases, one modifies each pixel by subtracting a mean of nearby pixels and dividing by a standard deviation of nearby pixels. In some cases, this is literally the mean and standard deviation of all pixels in a rectangular window centered on the pixel to be modified (Pinto et al., 2008). In other cases, this is a
weighted mean and weighted standard deviation using Gaussian weights centered on the pixel to be modified. In the case of color images, some strategies process different color
channels separately, while others combine information from different channels to normalize each pixel (Sermanet et al., 2012).

Figure 12.2 (input image, GCN, LCN): A comparison of global and local contrast normalization. Visually, the effects of global contrast normalization are subtle. It places all images on roughly the same scale, which reduces the burden on the learning algorithm to handle multiple scales. Local contrast normalization modifies the image much more, discarding all regions of constant intensity. This allows the model to focus on just the edges. Regions of fine texture, such as the houses in the second row, may lose some detail due to the bandwidth of the normalization kernel being too high.

Local contrast normalization can usually be implemented efficiently by using separable convolution (see section 9.8) to compute feature maps of local means and local standard deviations, then using element-wise subtraction and element-wise division on different feature maps. Local contrast normalization is a differentiable operation and can also be used as a nonlinearity applied to the hidden layers of a network, as well as a preprocessing operation applied to the input. As with global contrast normalization, we typically need to
regularize local contrast normalization to avoid division by zero. In fact, because local contrast normalization typically acts on smaller windows, it is even more important to regularize. Smaller windows are more likely to contain values that are all nearly the same as each other, and thus more likely to have zero standard deviation.
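One possible implementation of the Gaussian-weighted variant described above, as a sketch assuming SciPy is available; the window width sigma and the regularization constants are illustrative choices, not values from the text:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def local_contrast_normalization(X, sigma=3.0, lam=1e-4, eps=1e-3):
        # X: 2-D array holding one channel of an image.
        # gaussian_filter is separable, so each smoothing pass is cheap.
        X = X.astype(np.float64)
        local_mean = gaussian_filter(X, sigma)        # feature map of local means
        centered = X - local_mean                     # element-wise subtraction
        local_var = gaussian_filter(centered ** 2, sigma)
        local_std = np.sqrt(lam + local_var)          # regularized local std
        return centered / np.maximum(local_std, eps)  # element-wise division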
12.2.1.2 Dataset Augmentation

As described in section 7.4, it is easy to improve the generalization of a classifier by increasing the size of the training set by adding extra copies of the training examples that have been modified with transformations that do not change the class. Object recognition is a classification task that is especially amenable to this form of dataset augmentation because the class is invariant to so many transformations and the input can be easily transformed with many geometric operations. As described before, classifiers can benefit from random translations, rotations, and in some cases, flips of the input to augment the dataset. In specialized computer vision applications, more advanced transformations are commonly used for dataset augmentation. These schemes include random perturbation of the colors in an image (Krizhevsky et al., 2012) and nonlinear geometric distortions of the input (LeCun et al., 1998b).
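A minimal sketch of two of the simplest transformations mentioned above, random translation via cropping and horizontal flipping (the crop size is an arbitrary choice for illustration):

    import numpy as np

    def augment(image, crop=24, rng=None):
        # Return a randomly translated (cropped) and possibly flipped copy.
        if rng is None:
            rng = np.random.default_rng()
        r, c, _ = image.shape
        i = rng.integers(0, r - crop + 1)   # random vertical offset
        j = rng.integers(0, c - crop + 1)   # random horizontal offset
        patch = image[i:i + crop, j:j + crop]
        if rng.random() < 0.5:              # horizontal flip preserves the class
            patch = patch[:, ::-1]
        return patch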
12.3 Speech Recognition

The task of speech recognition is to map an acoustic signal containing a spoken natural language utterance into the corresponding sequence of words intended by the speaker. Let X = (x^{(1)}, x^{(2)}, ..., x^{(T)}) denote the sequence of acoustic input vectors (traditionally produced by splitting the audio into 20ms frames). Most speech recognition systems preprocess the input using specialized hand-designed features, but some deep learning systems (Jaitly and Hinton, 2011) learn features from raw input. Let y = (y_1, y_2, ..., y_N) denote the target output sequence (usually a sequence of words or characters). The automatic speech recognition (ASR) task consists of creating a function f*_{ASR} that computes the most probable linguistic sequence y given the acoustic sequence X:

    f*_{ASR}(X) = argmax_y P*(y | X)    (12.4)

where P* is the true conditional distribution relating the inputs X to the targets y. Since the 1980s and until about 2009-2012, state-of-the-art speech recognition systems primarily combined hidden Markov
models (HMMs) and Gaussian mixture models (GMMs). GMMs modeled the association between acoustic features and phonemes (Bahl et al., 1987), while HMMs modeled the sequence of phonemes. The GMM-HMM model family treats acoustic waveforms as being generated by the following process: first an HMM generates a sequence of phonemes and discrete sub-phonemic states (such as the beginning, middle, and end of each
phoneme), then a GMM transforms each discrete symbol into a brief segment of audio waveform. Although GMM-HMM systems dominated ASR until recently, speech recognition was actually one of the first areas where neural networks were applied, and numerous ASR systems from the late 1980s and early 1990s used neural nets (Bourlard and Wellekens, 1989; Waibel et al., 1989; Robinson and Fallside, 1991; Bengio et al., 1991, 1992; Konig et al., 1996). At the time, the performance of ASR based on neural nets approximately matched the performance of GMM-HMM systems. For example, Robinson and Fallside (1991) achieved a 26% phoneme error rate on the TIMIT corpus (Garofolo et al., 1993), with 39 phonemes to discriminate between, which was better than or comparable to HMM-based systems. Since then, TIMIT has been a benchmark for phoneme recognition, playing a role similar to the role MNIST plays for object recognition. However, because of the complex engineering involved in software systems for speech recognition and the effort that had been invested in building these systems on the basis of GMM-HMMs,
the industry did not see a compelling argument for switching to neural networks. As a consequence, until the late 2000s, both academic and industrial research in using neural nets for speech recognition mostly focused on using neural nets to learn extra features for GMM-HMM systems. Later, with much larger and deeper models and much larger datasets, recognition accuracy was dramatically improved by using neural networks to replace GMMs for the task of associating acoustic features to phonemes (or sub-phonemic states). Starting in 2009, speech researchers applied a form of deep learning based on unsupervised learning to speech recognition. This approach to deep learning was based on training undirected probabilistic models called restricted Boltzmann machines (RBMs) to model the input data. RBMs will be described in part III. To solve speech recognition tasks, unsupervised pretraining was used to build deep feedforward networks whose layers were each initialized by training an RBM. These networks
take spectral acoustic representations in a fixed-size input window (around a center frame) and predict the conditional probabilities of HMM states for that center frame. Training such deep networks helped to significantly improve the recognition rate on TIMIT (Mohamed et al., 2009, 2012a), bringing down the phoneme error rate from about 26% to 20.7%. See Mohamed et al. (2012b) for an analysis of reasons for the success of these models. Extensions to the basic phone recognition pipeline included the addition of speaker-adaptive features (Mohamed et al., 2011) that further reduced the error rate. This was quickly followed up by work to expand the architecture from phoneme recognition (which is what TIMIT is focused on) to large-vocabulary speech recognition (Dahl et al., 2012), which involves not just recognizing phonemes but also recognizing sequences of words from a large vocabulary. Deep networks for speech recognition eventually
shifted from being based on pretraining and Boltzmann machines to being based on techniques such as rectified linear units and dropout (Zeiler et al., 2013; Dahl et al., 2013). By that time, several of the major speech groups in industry had started exploring deep learning in collaboration with academic researchers. Hinton et al. (2012a) describe the breakthroughs achieved by these collaborators, which are now deployed in products such as mobile phones. Later, as these groups explored larger and larger labeled datasets and incorporated some of the methods for initializing, training, and setting up the architecture of deep nets, they realized that the unsupervised pretraining phase was either unnecessary or did not bring any significant improvement. These breakthroughs in recognition performance for word error rate in speech recognition were unprecedented (around 30% improvement) and followed a long period of about ten years during which error rates did not improve much with the traditional GMM-HMM technology, in spite of the continuously growing size of training sets (see figure 2.4 of Deng and Yu (2014)). This created a rapid shift in the speech recognition community towards deep
learning. In a matter of roughly two years, most of the industrial products for speech recognition incorporated deep neural networks, and this success spurred a new wave of research into deep learning algorithms and architectures for ASR, which is still ongoing today. One of these innovations was the use of convolutional networks (Sainath et al., 2013) that replicate weights across time and frequency, improving over the earlier time-delay neural networks that replicated weights only across time. The new two-dimensional convolutional models regard the input spectrogram not as one long vector but as an image, with one axis corresponding to time and the other to frequency of spectral components. Another important push, still ongoing, has been towards end-to-end deep learning speech recognition systems that completely remove the HMM. The first major breakthrough in this direction came from Graves et al. (2013), who trained a deep LSTM RNN (see section 10.10), using MAP inference over the frame-to-
phoneme alignment, as in LeCun et al. (1998b) and in the CTC framework (Graves et al., 2006; Graves, 2012; Graves et al., 2013). A deep RNN (Graves et al., 2013) has state variables from several layers at each time step, giving the unfolded graph two kinds of depth: ordinary depth due to a stack of layers, and depth due to time unfolding. This work brought the phoneme error rate on TIMIT to a record low of 17.7%. See Pascanu et al. (2014a) and Chung et al. (2014) for other variants of deep RNNs, applied in other settings. Another contemporary step toward end-to-end deep learning ASR is to let the system learn how to "align" the acoustic-level information with the phonetic-level
information (Chorowski et al., 2014; Lu et al., 2015).

12.4 Natural Language Processing

Natural language processing (NLP) is the use of human languages, such as English or French, by a computer. Computer programs typically read and emit specialized languages designed to allow efficient and unambiguous parsing by simple programs. More naturally occurring languages are often ambiguous and defy formal description. Natural language processing includes applications such as machine translation, in which the learner must read a sentence in one human language and emit an equivalent sentence in another human language. Many NLP applications are based on language models that define a probability distribution over sequences of words, characters or bytes in a natural language. As with the other applications discussed in this chapter, very generic neural network techniques can be successfully applied to natural language processing. However, to achieve excellent performance and to scale well to large applications, some domain-specific strategies become important. To build an efficient model of natural language, we must usually use techniques that are specialized for processing sequential data.
In many cases, we choose to regard natural language as a sequence of words, rather than a sequence of individual characters or bytes. Because the total number of possible words is so large, word-based language models must operate on an extremely high-dimensional and sparse discrete space. Several strategies have been developed to make models of such a space efficient, both in a computational and in a statistical sense.

12.4.1 n-grams

A language model defines a probability distribution over sequences of tokens in a natural language. Depending on how the model is designed, a token may be a word, a character, or even a byte. Tokens are always discrete entities. The earliest successful language models were based on models of fixed-length sequences of tokens called n-grams. An n-gram is a sequence of n tokens. Models based on n-grams define the conditional probability of the n-th token given the preceding n−1 tokens. The model uses products of these conditional distributions to define the probability distribution over longer sequences:
    P(x_1, ..., x_τ) = P(x_1, ..., x_{n−1}) ∏_{t=n}^{τ} P(x_t | x_{t−n+1}, ..., x_{t−1}).    (12.5)
This decomposition is justified by the chain rule of probability. The probability distribution over the initial sequence P(x_1, ..., x_{n−1}) may be modeled by a different model with a smaller value of n. Training n-gram models is straightforward because the maximum likelihood estimate can be computed simply by counting how many times each possible n-gram occurs in the training set. Models based on n-grams have been the core building block of statistical language modeling for many decades (Jelinek and Mercer, 1980; Katz, 1987; Chen and Goodman, 1999). For small values of n, models have particular names: unigram for n = 1, bigram for n = 2, and trigram for n = 3. These names derive from the Latin prefixes for the corresponding numbers and the Greek suffix "-gram" denoting something that is written. Usually we train both an n-gram model and an n−1 gram model simultaneously. This makes it easy to compute
    P(x_t | x_{t−n+1}, ..., x_{t−1}) = P_n(x_{t−n+1}, ..., x_t) / P_{n−1}(x_{t−n+1}, ..., x_{t−1})    (12.6)

simply by looking up two stored probabilities. For this to exactly reproduce inference in P_n, we must omit the final character from each sequence when we train P_{n−1}. As an example, we demonstrate how a trigram model computes the probability of the sentence "the dog ran away." The first words of the sentence cannot be handled by the default formula based on conditional probability because there is no context at the beginning of the sentence. Instead, we must use the marginal probability over words at the start of the sentence. We thus evaluate P_3(the dog ran). Finally, the last word may be predicted using the typical case, of using the conditional distribution P(away | dog ran). Putting this together with equation 12.6, we obtain:

    P(the dog ran away) = P_3(the dog ran) P_3(dog ran away) / P_2(dog ran).    (12.7)
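As a concrete illustration of equations 12.5 to 12.7, a maximum-likelihood trigram model can be built from raw counts (the toy corpus here is invented):

    from collections import Counter

    corpus = "the dog ran away . the dog ran home .".split()

    bigrams = Counter(zip(corpus, corpus[1:]))
    trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
    n2, n3 = sum(bigrams.values()), sum(trigrams.values())

    def p2(w1, w2):
        return bigrams[(w1, w2)] / n2

    def p3(w1, w2, w3):
        return trigrams[(w1, w2, w3)] / n3

    # Equation 12.7: P3(the dog ran) * P3(dog ran away) / P2(dog ran)
    print(p3("the", "dog", "ran") * p3("dog", "ran", "away") / p2("dog", "ran"))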
A fundamental limitation of maximum likelihood for n-gram models is that P_n as estimated from training set counts is very likely to be zero in many cases, even though the tuple (x_{t−n+1}, ..., x_t) may appear in the test set. This can cause two different kinds of catastrophic outcomes. When P_{n−1} is zero, the ratio is undefined, so the model does not even produce a sensible output. When P_{n−1} is non-zero but P_n is zero, the test log-likelihood is −∞. To avoid such catastrophic outcomes, most n-gram models employ some form of smoothing.
Smoothing techniques shift probability mass from the observed tuples to unobserved ones that are similar. See Chen and Goodman (1999) for a review and empirical comparisons. One basic technique consists of adding non-zero probability mass to all of the possible next symbol values. This method can be justified as Bayesian inference with a uniform or Dirichlet prior over the count parameters. Another very popular idea is to form a mixture model containing higher-order and lower-order n-gram models, with the higher-order models providing more capacity and the lower-order models being more likely to avoid counts of zero. Back-off methods look up the lower-order n-grams if the frequency of the context x_{t−1}, ..., x_{t−n+1} is too small to use the higher-order model. More formally, they estimate the distribution over x_t by using contexts x_{t−n+k}, ..., x_{t−1}, for increasing k, until a sufficiently reliable estimate is found.
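A minimal sketch of the first smoothing technique just described, adding a pseudo-count α to every possible next symbol (the counts and vocabulary size below are placeholders):

    def smoothed_conditional(count_ngram, count_context, vocab_size, alpha=1.0):
        # Add-alpha smoothing: every possible next symbol receives non-zero
        # probability mass; alpha = 1 corresponds to a uniform prior.
        return (count_ngram + alpha) / (count_context + alpha * vocab_size)

    # An n-gram never seen in training now has small but non-zero probability:
    print(smoothed_conditional(0, 5, vocab_size=10000))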
Classical n-gram models are particularly vulnerable to the curse of dimensionality. There are |V|^n possible n-grams and |V| is often very large. Even with a massive training set and modest n, most n-grams will not occur in the training set. One way to view a classical n-gram model is that it is performing nearest-neighbor lookup. In other words, it can be viewed as a local non-parametric predictor, similar to k-nearest neighbors. The statistical problems facing these extremely local predictors are described in section 5.11.2. The problem for a language model is even more severe than usual, because any two different words have the same distance from each other in one-hot vector space. It is thus difficult to leverage much information from any "neighbors": only training examples that repeat literally the same context are useful for local generalization. To overcome these problems, a language model must be able to share knowledge between one word and other semantically similar words.
To improve the statistical efficiency of n-gram models, class-based language models (Brown et al., 1992; Ney and Kneser, 1993; Niesler et al., 1998) introduce the notion of word categories and then share statistical strength between words that are in the same category. The idea is to use a clustering algorithm to partition the set of words into clusters or classes, based on their co-occurrence frequencies with other words. The model can then use word class IDs rather than individual word IDs to represent the context on the right side of the conditioning bar. Composite models combining word-based and class-based models via mixing or back-off are also possible. Although word classes provide a way to generalize between sequences in which some word is replaced by another of the same class, much information is lost in this representation.
12.4.2 Neural Language Models

Neural language models or NLMs are a class of language model designed to overcome the curse of dimensionality problem for modeling natural language sequences by using a distributed representation of words (Bengio et al., 2001). Unlike class-based n-gram models, neural language models are able to recognize that two words are similar without losing the ability to encode each word as distinct from the other. Neural language models share statistical strength between one word (and its context) and other similar words and contexts. The distributed representation the model learns for each word enables this sharing by allowing the model to treat words that have features in common similarly. For example, if the word dog and the word cat map to representations that share many attributes, then sentences that contain the word cat can inform the predictions that will be made by the model for sentences that contain the word dog, and vice-versa. Because there are many such attributes, there are many ways in which generalization can happen, transferring information from each training sentence to an exponentially large number of semantically related sentences. The curse of dimensionality requires the model to generalize to a number of sentences that is exponential in the sentence length.
The model counters this curse by relating each training sentence to an exponential number of similar sentences. We sometimes call these word representations word embeddings. In this interpretation, we view the raw symbols as points in a space of dimension equal to the vocabulary size. The word representations embed those points in a feature space of lower dimension. In the original space, every word is represented by a one-hot vector, so every pair of words is at Euclidean distance √2 from each other. In the embedding space, words that frequently appear in similar contexts (or any pair of words sharing some "features" learned by the model) are close to each other. This often results in words with similar meanings being neighbors. Figure 12.3 zooms in on specific areas of a learned word embedding space to show how semantically similar words map to representations that are close to each other.
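A small numeric illustration of this contrast between one-hot distances and learned embedding distances, using made-up 2-D vectors (real embeddings are learned and much higher-dimensional):

    import numpy as np

    # Any two distinct one-hot vectors are at Euclidean distance sqrt(2).
    cat_onehot = np.zeros(10000); cat_onehot[7] = 1.0
    dog_onehot = np.zeros(10000); dog_onehot[42] = 1.0
    print(np.linalg.norm(cat_onehot - dog_onehot))    # 1.4142...

    # Learned embeddings can place semantically related words close together.
    cat = np.array([0.9, 1.1])    # hypothetical learned vectors
    dog = np.array([1.0, 1.0])
    car = np.array([-3.0, 0.2])
    print(np.linalg.norm(cat - dog))   # small distance
    print(np.linalg.norm(cat - car))   # large distance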
Neural networks in other domains also define embeddings. For example, a hidden layer of a convolutional network provides an "image embedding." Usually NLP practitioners are much more interested in this idea of embeddings because natural language does not originally lie in a real-valued vector space. The hidden layer has provided a more qualitatively dramatic change in the way the data is represented. The basic idea of using distributed representations to improve models for natural language processing is not restricted to neural networks. It may also be used with graphical models that have distributed representations in the form of multiple latent variables (Mnih and Hinton, 2007).
Figure 12.3: Two-dimensional visualizations of word embeddings obtained from a neural machine translation model (Bahdanau et al., 2015), zooming in on specific areas where semantically related words have embedding vectors that are close to each other. Countries appear on the left and numbers on the right. Keep in mind that these embeddings are 2-D for the purpose of visualization. In real applications, embeddings typically have higher dimensionality and can simultaneously capture many kinds of similarity between words.

12.4.3 High-Dimensional Outputs

In many natural language applications, we often want our models to produce words (rather than characters) as the fundamental
unit of the output. For large vocabularies, it can be very computationally expensive to represent an output distribution over the choice of a word, because the vocabulary size is large. In many applications, V contains hundreds of thousands of words. The naive approach to representing such a distribution is to apply an affine transformation from a hidden representation to the output space, then apply the softmax function. Suppose we have a vocabulary V with size |V|. The weight matrix describing the linear component of this affine transformation is very large, because its output dimension is |V|. This imposes a high memory cost to represent the matrix, and a high computational cost to multiply by it. Because the softmax is normalized across all |V| outputs, it is necessary to perform the full matrix multiplication at training time as well as test time; we cannot calculate only the dot product with the weight vector for the correct output.
The high computational costs of the output layer thus arise both at training time (to compute the likelihood and its gradient) and at test time (to compute probabilities for all or selected words). For specialized
loss functions, the gradient can be computed efficiently (Vincent et al., 2015), but the standard cross-entropy loss applied to a traditional softmax output layer poses many difficulties. Suppose that h is the top hidden layer used to predict the output probabilities ŷ. If we parametrize the transformation from h to ŷ with learned weights W and learned biases b, then the affine-softmax output layer performs the following computations:

    a_i = b_i + Σ_j W_{ij} h_j    ∀ i ∈ {1, ..., |V|},    (12.8)

    ŷ_i = e^{a_i} / Σ_{i′=1}^{|V|} e^{a_{i′}}.    (12.9)

If h contains n_h elements then the above operation is O(|V| n_h). With n_h in the thousands and |V| in the hundreds of thousands, this operation dominates the computation of most neural language models.
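A rough illustration of equations 12.8 and 12.9 and their O(|V| n_h) cost (the sizes are arbitrary placeholders, and the max-subtraction is a standard numerical-stability step not shown in the equations):

    import numpy as np

    n_h, V = 512, 100000
    h = np.random.randn(n_h)
    W = np.random.randn(V, n_h)          # the |V| x n_h weight matrix
    b = np.zeros(V)

    a = b + W @ h                        # equation 12.8: O(|V| n_h) multiply
    a = a - a.max()                      # stabilize the exponentials
    y_hat = np.exp(a) / np.exp(a).sum()  # equation 12.9: softmax over all |V| words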
12.4.3.1 Use of a Short List

The first neural language models (Bengio et al., 2001, 2003) dealt with the high cost of using a softmax over a large number of output words by limiting the vocabulary size to 10,000 or 20,000 words. Schwenk and Gauvain (2002) and Schwenk (2007) built upon this approach by splitting the vocabulary V into a shortlist L of most frequent words (handled by the neural net) and a tail T = V \ L of more rare words (handled by an n-gram model). To be able to combine the two predictions, the neural net also has to predict the probability that a word appearing after context C belongs to the tail list. This may be achieved by adding an extra sigmoid output unit to provide an estimate of P(i ∈ T | C). The extra output can then be used to achieve an estimate of the probability distribution over all words in V as follows:

    P(y = i | C) = 1_{i∈L} P(y = i | C, i ∈ L)(1 − P(i ∈ T | C)) + 1_{i∈T} P(y = i | C, i ∈ T) P(i ∈ T | C)    (12.10)
where P(y = i | C, i ∈ L) is provided by the neural language model and P(y = i | C, i ∈ T) is provided by the n-gram model. With slight modification, this approach can also work using an extra output value in the neural language model's softmax layer, rather than a separate sigmoid unit.
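A minimal sketch of equation 12.10, assuming hypothetical functions p_neural and p_ngram stand in for the two component models:

    def shortlist_probability(i, shortlist, p_neural, p_ngram, p_tail):
        # Equation 12.10.
        # p_neural(i): P(y = i | C, i in L) from the neural language model
        # p_ngram(i):  P(y = i | C, i in T) from the n-gram model
        # p_tail:      P(i in T | C) from the extra sigmoid output unit
        if i in shortlist:
            return p_neural(i) * (1.0 - p_tail)
        return p_ngram(i) * p_tail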
An obvious disadvantage of the short list approach is that the potential generalization advantage of the neural language models is limited to the most frequent words, where, arguably, it is the least useful. This disadvantage has stimulated the exploration of alternative methods to deal with high-dimensional outputs, described below.

12.4.3.2 Hierarchical Softmax

A classical approach (Goodman, 2001) to reducing the computational burden of high-dimensional output layers over large vocabulary sets V is to decompose probabilities hierarchically. Instead of necessitating a number of computations proportional to |V| (and also proportional to the number of hidden units, n_h), the |V| factor can be reduced to as low as log |V|. Bengio (2002) and Morin and Bengio (2005) introduced this factorized approach to the context of neural language models. One can think of this hierarchy as building categories of words, then categories of categories of words, then categories of categories of categories of words, etc. These nested categories form a tree, with words at the leaves. In a balanced tree, the tree has depth O(log |V|). The probability of choosing a word is given by the product of the probabilities of choosing the branch leading to that word at every node on a path from the root of the tree to the leaf containing
the word. Figure 12.4 illustrates a simple example. Mnih and Hinton (2009) also describe how to use multiple paths to identify a single word in order to better model words that have multiple meanings. Computing the probability of a word then involves summation over all of the paths that lead to that word. To predict the conditional probabilities required at each node of the tree, we typically use a logistic regression model at each node of the tree, and provide the same context C as input to all of these models. Because the correct output is encoded in the training set, we can use supervised learning to train the logistic regression models. This is typically done using a standard cross-entropy loss, corresponding to maximizing the log-likelihood of the correct sequence of decisions. Because the output log-likelihood can be computed efficiently (as low as log |V| rather than |V|), its gradients may also be computed efficiently. This includes not only the gradient with respect to the output parameters but
also the gradients with respect to the hidden layer activations. It is possible but usually not practical to optimize the tree structure to minimize the expected number of computations. Tools from information theory specify how to choose the optimal binary code given the relative frequencies of the words. To do so, we could structure the tree so that the number of bits associated with a word is approximately equal to the logarithm of the frequency of that word.
Figure 12.4: Illustration of a simple hierarchy of word categories, with 8 words w_0, ..., w_7 organized into a three-level hierarchy. The leaves of the tree represent actual specific words. Internal nodes represent groups of words. Any node can be indexed by the sequence of binary decisions (0 = left, 1 = right) to reach the node from the root. Super-class (0) contains the classes (0,0) and (0,1), which respectively contain the sets of words {w_0, w_1} and {w_2, w_3}, and similarly super-class (1) contains the classes
(1,0) and (1,1), which respectively contain the words {w_4, w_5} and {w_6, w_7}. If the tree is sufficiently balanced, the maximum depth (number of binary decisions) is on the order of the logarithm of the number of words |V|: the choice of one out of |V| words can be obtained by doing O(log |V|) operations (one for each of the nodes on the path from the root). In this example, computing the probability of a word y can be done by multiplying three probabilities, associated with the binary decisions to move left or right at each node on the path from the root to a node y. Let b_i(y) be the i-th binary decision when traversing the tree towards the value y. The probability of sampling an output y decomposes into a product of conditional probabilities, using the chain rule for conditional probabilities, with each node indexed
by the prefix of these bits. For example, node (1,0) corresponds to the prefix (b_0(w_4) = 1, b_1(w_4) = 0), and the probability of w_4 can be decomposed as follows:

    P(y = w_4) = P(b_0 = 1, b_1 = 0, b_2 = 0)    (12.11)
               = P(b_0 = 1) P(b_1 = 0 | b_0 = 1) P(b_2 = 0 | b_0 = 1, b_1 = 0).    (12.12)
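To make the decomposition in equations 12.11 and 12.12 concrete, here is a minimal sketch with per-node logistic regressions; the weight vectors and context below are made-up placeholders, not a trained model:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def path_probability(context, path, node_weights):
        # path: binary decisions from root to leaf, e.g. (1, 0, 0) for w4.
        # node_weights: maps each visited node (a bit prefix) to the weight
        # vector of a logistic regression over the context.
        prob, prefix = 1.0, ()
        for bit in path:
            p_right = sigmoid(node_weights[prefix] @ context)
            prob *= p_right if bit == 1 else (1.0 - p_right)
            prefix += (bit,)             # descend to the chosen child
        return prob

    rng = np.random.default_rng(0)
    context = rng.standard_normal(5)
    weights = {(): rng.standard_normal(5),
               (1,): rng.standard_normal(5),
               (1, 0): rng.standard_normal(5)}
    print(path_probability(context, (1, 0, 0), weights))  # P(y = w4)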
However, in practice, the computational savings are typically not worth the effort because the computation of the output probabilities is only one part of the total computation in the neural language model. For example, suppose there are l fully connected hidden layers of width n_h. Let n_b be the weighted average of the number of bits required to identify a word, with the weighting given by the frequency of these words. In this example, the number of operations needed to compute the hidden activations grows as O(l n_h²) while the output computations grow as O(n_h n_b). As long as n_b ≤ l n_h, we can reduce computation more by shrinking n_h than by shrinking n_b. Indeed, n_b is often small. Because the size of the vocabulary rarely exceeds a million words and log_2(10^6) ≈ 20, it is possible to reduce n_b to about 20, but n_h is often much larger, around 10^3 or more. Rather than carefully optimizing a tree with a branching factor of 2, one can instead define a tree with depth two and a branching factor of √|V|. Such a tree corresponds to simply defining a set of mutually exclusive word classes. The simple approach based on a tree of depth
two captures most of the computational benefit of the hierarchical strategy. One question that remains somewhat open is how to best define these word classes, or how to define the word hierarchy in general. Early work used existing hierarchies (Morin and Bengio, 2005), but the hierarchy can also be learned, ideally jointly with the neural language model. Learning the hierarchy is difficult. An exact optimization of the log-likelihood appears intractable because the choice of a word hierarchy is a discrete one, not amenable to gradient-based optimization. However, one could use discrete optimization to approximately optimize the partition of words into word classes. An important advantage of the hierarchical softmax is that it brings computational benefits both at training time and at test time, if at test time we want to compute the probability of specific words. Of course, computing the probability of all |V| words will remain expensive even with the hierarchical softmax.
Another important operation is selecting the most likely word in a given context. Unfortunately the tree structure does not provide an efficient and exact solution to this problem. A disadvantage is that in practice the hierarchical softmax tends to give worse test results than the sampling-based methods we will describe next. This may be due to a poor choice of word classes.

12.4.3.3 Importance Sampling

One way to speed up the training of neural language models is to avoid explicitly computing the contribution of the gradient from all of the words that do not appear
in the next position. Every incorrect word should have low probability under the model. It can be computationally costly to enumerate all of these words. Instead, it is possible to sample only a subset of the words. Using the notation introduced in equation 12.8, the gradient can be written as follows:

    ∂ log P(y | C) / ∂θ = ∂ log softmax_y(a) / ∂θ    (12.13)
                        = ∂/∂θ log ( e^{a_y} / Σ_i e^{a_i} )    (12.14)
                        = ∂/∂θ ( a_y − log Σ_i e^{a_i} )    (12.15)
                        = ∂a_y/∂θ − Σ_i P(y = i | C) ∂a_i/∂θ    (12.16)

where a is the vector of pre-softmax activations (or scores), with one element per word. The first term is the positive phase term (pushing a_y up) while the second term is the negative phase term (pushing a_i down for all i, with weight P(i | C)). Since the negative phase term is an expectation, we can estimate it with a Monte Carlo sample. However, that would require sampling from the model itself.
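The decomposition in equation 12.16 can be checked numerically for the special case θ = a, where the gradient is the one-hot target minus the softmax probabilities (a NumPy sketch with arbitrary scores):

    import numpy as np

    a = np.random.randn(10)                 # scores; take theta = a for simplicity
    y = 3
    p = np.exp(a - a.max()); p = p / p.sum()

    grad = -p.copy()                        # negative phase: push every a_i down
    grad[y] += 1.0                          # positive phase: push a_y up

    def log_p_y(a):
        m = a.max()
        return a[y] - (m + np.log(np.exp(a - m).sum()))

    num = np.zeros_like(a)                  # finite-difference check
    for i in range(len(a)):
        d = np.zeros_like(a); d[i] = 1e-6
        num[i] = (log_p_y(a + d) - log_p_y(a - d)) / 2e-6
    print(np.allclose(grad, num, atol=1e-5))  # True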
Sampling from the model requires computing P(i | C) for all i in the vocabulary, which is precisely what we are trying to avoid. Instead of sampling from the model, one can sample from another distribution, called the proposal distribution (denoted q), and use appropriate weights to correct for the bias introduced by sampling from the wrong distribution (Bengio and Senecal, 2003, 2008). This is an application of a more general technique called importance sampling, which will be described in more detail in section 17.2. Unfortunately, even exact importance sampling is not efficient because it requires computing weights p_i / q_i, where p_i = P(i | C), which can only be computed if all the scores a_i are computed. The solution adopted for this application is called biased importance sampling, where the importance weights are normalized to sum to 1. When negative word n_i is sampled, the associated gradient is weighted by

    w_i = (p_{n_i} / q_{n_i}) / Σ_{j=1}^{N} (p_{n_j} / q_{n_j}).    (12.17)
These weights are used to give the appropriate importance to the m negative samples from q used to form the estimated negative phase contribution to the gradient:
    Σ_{i=1}^{|V|} P(i | C) ∂a_i/∂θ ≈ (1/m) Σ_{i=1}^{m} w_i ∂a_{n_i}/∂θ.    (12.18)

A unigram or a bigram distribution works well as the proposal distribution q. It is easy to estimate the parameters of such a distribution from data. After estimating the parameters, it is also possible to sample from such a distribution very efficiently.
Dauphin et al. (2011) demonstrated that such models can be accelerated using importance sampling. Their algorithm minimizes the reconstruction loss for the "positive words" (those that are non-zero in the target) and an equal number of "negative words." The negative words are chosen randomly, using a heuristic to sample words that are more likely to be mistaken. The bias introduced by this heuristic oversampling can then be corrected using importance weights. In all of these cases, the computational complexity of gradient estimation for the output layer is reduced to be proportional to the number of negative samples rather than to the size of the output vector.
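The sketch below illustrates the general pattern, though not the exact algorithm of Dauphin et al. (2011): the reconstruction loss is computed exactly on the positive words and estimated by importance sampling on the remaining zero-target words; dividing each sampled term by its proposal probability makes the average an unbiased estimate of the full sum (ignoring the small overlap with the positive words).

```python
import numpy as np

def sampled_sparse_loss(logits, positives, q, m, rng):
    """Exact squared error on the positive words plus an
    importance-sampled estimate of the loss over zero-target words."""
    pos_loss = np.sum((logits[positives] - 1.0) ** 2)
    neg = rng.choice(len(logits), size=m, p=q)   # heuristic proposal q
    neg_loss = np.mean((logits[neg] ** 2) / q[neg])
    return pos_loss + neg_loss

rng = np.random.default_rng(0)
V = 10_000
logits = rng.normal(scale=0.1, size=V)   # model's dense reconstruction
positives = np.array([3, 250, 712])      # non-zero words in the target
q = np.full(V, 1.0 / V)                  # stand-in for the heuristic sampler
print(sampled_sparse_loss(logits, positives, q, m=64, rng=rng))
```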
12.4.3.4 Noise-Contrastive Estimation and Ranking Loss

Other approaches based on sampling have been proposed to reduce the computational cost of training neural language models with large vocabularies. An early example is the ranking loss proposed by Collobert and Weston (2008a), which views the output of the neural language model for each word as a score and tries to make the score of the correct word, a_y, rank high in comparison to the other scores a_i.
The ranking loss is

$$L = \sum_i \max\left(0,\; 1 - a_y + a_i\right). \tag{12.19}$$

The gradient of the i-th term is zero if the score of the observed word, a_y, exceeds the score of the negative word a_i by a margin of 1. One issue with this criterion is that it does not provide estimated conditional probabilities, which are useful in some applications, including speech recognition and text generation (among them conditional text generation tasks such as translation).
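A minimal sketch of equation 12.19 (the scores are made up for illustration):

```python
import numpy as np

def ranking_loss(a, y):
    """Equation 12.19: every incorrect word whose score comes within a
    margin of 1 of the observed word's score contributes to the loss."""
    margins = 1.0 - a[y] + a
    margins[y] = 0.0        # the i = y term is a constant; drop it
    return np.sum(np.maximum(0.0, margins))

a = np.array([2.5, 0.5, 3.1, -1.0])  # hypothetical scores; observed word y = 2
print(ranking_loss(a, y=2))          # only a_0 = 2.5 violates the margin: ~0.4
```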
A more recently used training objective for neural language models is noise-contrastive estimation, introduced in section 18.6. This approach has been successfully applied to neural language models (Mnih and Teh, 2012; Mnih and Kavukcuoglu, 2013).

12.4.4 Combining Neural Language Models with n-grams

A major advantage of n-gram models over neural networks is that n-gram models achieve high model capacity (by storing the frequencies of very many tuples) while requiring very little computation to process an example (by looking up only the few tuples that match the current context).
If we use hash tables or trees to access the counts, the computation used for n-grams is almost independent of capacity. In comparison, doubling a neural network's number of parameters typically also roughly doubles its computation time. Exceptions include models that avoid using all parameters on each pass. Embedding layers index only a single embedding in each pass, so we can increase the vocabulary size without increasing the computation time per example. Some other models, such as tiled convolutional networks, can add parameters while reducing the degree of parameter sharing in order to maintain the same amount of computation. Typical neural network layers based on matrix multiplication, however, use an amount of computation proportional to the number of parameters.
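The contrast is easy to see in code. The toy sketch below (corpus and words invented for illustration) stores trigram counts in a hash table; looking up a conditional probability touches only the tuples matching the current context, no matter how many tuples are stored.

```python
from collections import Counter

tokens = "the red apple fell near the red barn".split()   # toy corpus
trigrams = Counter(zip(tokens, tokens[1:], tokens[2:]))
bigrams = Counter(zip(tokens, tokens[1:]))

def trigram_prob(w1, w2, w3):
    """Maximum-likelihood P(w3 | w1, w2) via O(1) hash-table lookups."""
    denom = bigrams[(w1, w2)]
    return trigrams[(w1, w2, w3)] / denom if denom else 0.0

print(trigram_prob("the", "red", "apple"))   # 0.5 in this toy corpus
```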
One easy way to add capacity is thus to combine both approaches in an ensemble consisting of a neural language model and an n-gram language model (Bengio et al., 2001, 2003). As with any ensemble, this technique can reduce test error if the ensemble members make independent mistakes. The field of ensemble learning provides many ways of combining the ensemble members' predictions, including uniform weighting and weights chosen on a validation set. Mikolov et al. (2011a) extended the ensemble to include not just two models but a large array of models. It is also possible to pair a neural network with a maximum entropy model and train both jointly (Mikolov et al., 2011b).
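For the two-model case, a common and simple combination rule is linear interpolation of the two next-word distributions, with the mixture weight chosen on a validation set. The sketch below assumes each model's per-token probabilities on held-out data are available; the grid search is one simple way to pick the weight, not the only one.

```python
import numpy as np

def interpolate(p_neural, p_ngram, alpha):
    """Mixture of the two models' next-word distributions."""
    return alpha * p_neural + (1.0 - alpha) * p_ngram

def pick_alpha(pn, pg, grid=np.linspace(0.0, 1.0, 21)):
    """Choose the mixture weight minimizing validation negative
    log-likelihood; pn and pg hold the probability each model assigned
    to every observed validation token."""
    nll = lambda a: -np.mean(np.log(a * pn + (1.0 - a) * pg + 1e-12))
    return min(grid, key=nll)

pn = np.array([0.30, 0.05, 0.20])   # hypothetical neural LM probabilities
pg = np.array([0.10, 0.15, 0.08])   # hypothetical n-gram LM probabilities
alpha = pick_alpha(pn, pg)
```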
The maximum entropy approach can be viewed as training a neural network with an extra set of inputs that are connected directly to the output and not to any other part of the model. The extra inputs are indicators for the presence of particular n-grams in the input context, so these variables are very high-dimensional and very sparse. The increase in model capacity is huge (the new portion of the architecture contains up to $|\mathbb{V}|^n$ parameters), but the amount of added computation needed to process an input is minimal, because the extra inputs are very sparse.

12.4.5 Neural Machine Translation

Machine translation is the task of reading a sentence in one natural language and emitting a sentence with the equivalent meaning in another language. Machine translation systems often involve many components. At a high level, there is often one component that proposes many candidate translations. Many of these translations will not be grammatical due to differences between the languages. For example, many languages put adjectives after nouns, so when translated to English directly they yield phrases such as "apple red." The proposal mechanism suggests many variants of the suggested translation, ideally including "red apple." A second component of the translation system, a language model, evaluates the proposed translations and can score "red apple" as better than "apple red."
The earliest use of neural networks for machine translation was to upgrade the language model of a translation system with a neural language model (Schwenk et al., 2006; Schwenk, 2010). Previously, most machine translation systems had used an n-gram model for this component. The n-gram-based models used for machine translation include not just traditional back-off n-gram models (Jelinek and Mercer, 1980; Katz, 1987; Chen and Goodman, 1999) but also maximum entropy language models (Berger et al., 1996), in which an affine-softmax layer predicts the next word given the presence of frequent n-grams in the context. Traditional language models simply report the probability of a natural language sentence. Because machine translation involves producing an output sentence given an input sentence, it makes sense to extend the natural language model to be conditional.
As described in section 6.2.1.1, it is straightforward to extend a model that defines a marginal distribution over some variable so that it defines a conditional distribution over that variable given a context c, where c might be a single variable or a list of variables. Devlin et al. (2014) beat the state of the art on some statistical machine translation benchmarks by using an MLP to score a phrase t_1, t_2, ..., t_k in the target language given a phrase s_1, s_2, ..., s_n in the source language. The MLP estimates P(t_1, t_2, ..., t_k | s_1, s_2, ..., s_n), and this estimate replaces the one provided by conditional n-gram models.
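A minimal sketch of such a phrase scorer follows; the sizes, the fixed three-word phrases, and the single hidden layer are illustrative assumptions, not Devlin et al.'s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, h = 1000, 32, 64                        # hypothetical sizes
E = rng.normal(scale=0.1, size=(V, d))        # shared word embeddings
W1 = rng.normal(scale=0.1, size=(6 * d, h))   # 3 source + 3 target words
b1 = np.zeros(h)
w2 = rng.normal(scale=0.1, size=h)

def phrase_score(source_ids, target_ids):
    """Unnormalized log-score for a (source phrase, target phrase) pair."""
    x = np.concatenate([E[source_ids].ravel(), E[target_ids].ravel()])
    return np.tanh(x @ W1 + b1) @ w2

print(phrase_score([4, 17, 256], [9, 42, 7]))  # hypothetical phrase pair
```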
[Figure 12.5 diagram: source object (French sentence or image) → encoder → intermediate, semantic representation → decoder → output object (English sentence).]

Figure 12.5: The encoder-decoder architecture maps back and forth between a surface representation (such as a sequence of words or an image) and a semantic representation. By using the output of an encoder of data from one modality (such as an encoder mapping French sentences to hidden representations capturing the meaning of the sentences) as the input to a decoder for another modality (such as a decoder mapping hidden representations capturing the meaning of sentences to English sentences), we can train systems that translate from one modality to another. This idea has been applied successfully not just to machine translation but also to caption generation from images.

A drawback of the MLP-based approach is that it requires the sequences to be preprocessed to a fixed length. To make translation more flexible, we would like to use a model that can accommodate variable-length inputs and variable-length outputs. An RNN provides this ability. Section 10.2.4 describes several ways of constructing an RNN that represents a conditional distribution over a sequence given some input, and section describes how
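To make figure 12.5 concrete, here is a minimal forward-pass sketch of the encoder-decoder idea with plain, untrained, randomly initialized RNN cells; a real system would use gated units, learned parameters, and a proper stopping criterion rather than a fixed output length.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 1000, 32                             # hypothetical sizes
E  = rng.normal(scale=0.1, size=(V, d))     # word embeddings
Wx = rng.normal(scale=0.1, size=(d, d))     # input-to-state weights
Wh = rng.normal(scale=0.1, size=(d, d))     # state-to-state weights
Wo = rng.normal(scale=0.1, size=(d, V))     # state-to-vocabulary scores

def rnn_step(h, word_id):
    return np.tanh(E[word_id] @ Wx + h @ Wh)

def encode(source_ids):
    """Compress a variable-length source sentence into one fixed-size
    vector (the intermediate, semantic representation of figure 12.5)."""
    h = np.zeros(d)
    for w in source_ids:
        h = rnn_step(h, w)
    return h

def decode(context, max_len):
    """Greedily emit target words, starting from the encoder's summary."""
    h, out = context, []
    for _ in range(max_len):
        word = int(np.argmax(h @ Wo))
        out.append(word)
        h = rnn_step(h, word)
    return out

print(decode(encode([4, 17, 256]), max_len=5))
```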