Dataset preview (first rows, truncated):

| page_no | page_content |
|---|---|
| 1 | page_content='MANNINGMohamed Elgendy' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 0} |
| 2 | page_content='Deep Learning for\nVision Systems\nMOHAMED ELGENDY\nMANNING\nSHELTER ISLAND' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 1} |

Each page_content value is a serialized document string that embeds the page text plus a metadata dict giving the source PDF path and the zero-based page index.
Deep Learning Books Dataset
Dataset Information
Features:
page_no: Integer (int64) - Page number in the book.
page_content: String - Text content of the page.
Splits:
train: Training split.
- Number of examples: 474
- Number of bytes: 1,030,431
Download Size: 509,839 bytes
Dataset Size: 1,030,431 bytes
Dataset Applications
The "deep_learning_books_dataset" dataset contains text extracted from the pages of deep-learning books. It can be used for natural language processing (NLP) tasks such as text classification, language modeling, and text generation.
Using Python and Hugging Face's Datasets Library
To use this dataset for NLP text generation and language modeling tasks, you can follow these steps:
- Install the required library:

pip install datasets

- Load the dataset:

from datasets import load_dataset

dataset = load_dataset("Falah/deep_learning_books_dataset")
Citation
Please use the following citation when referencing this dataset:
@dataset{deep_learning_books_dataset,
  author    = {Falah.G.Salieh},
  title     = {Deep Learning Books Dataset},
  year      = {2023},
  publisher = {HuggingFace Hub},
  version   = {1.0},
  location  = {Online},
  url       = {https://huggingface.co/datasets/Falah/deep_learning_books_dataset}
}
Apache License:
The "{Deep Learning Books Dataset" is distributed under the Apache License 2.0. The specific licensing and usage terms for this dataset can be found in the dataset repository or documentation. Please make sure to review and comply with the applicable license and usage terms before downloading and using the dataset.
Downloads last month: 7