deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf
Deep Learning for Vision Systems
Mohamed Elgendy
Manning, Shelter Island
For online information and ordering of this and other Manning books, please visit www.manning.com. The publisher offers discounts on this book when ordered in quantity. For more information, please contact Special Sales Department, Manning Publications Co., 20 Baldwin Road, PO Box 761, Shelter Island, NY 11964. Email: ord...
To my mom, Huda, who taught me perseverance and kindness To my dad, Ali, who taught me patience and purpose To my loving and supportive wife, Amanda, who always inspires me to keep climbing To my two-year-old daughter, Emily, who teaches me every day that AI still has a long way to go to catch up with even the tinies...
contents preface xiii acknowledgments xv about this book xvi about the author xix about the cover illustration xx PART 1 DEEP LEARNING FOUNDATION 1 1 Welcome to computer vision 3 1.1 Computer vision 4 What is visual perception? 5 ■ Vision systems 5 ■ Sensing devices 7 ■ Interpreting devices 8 1...
1.5 Image preprocessing 23 Converting color images to grayscale to reduce computation complexity 23 1.6 Feature extraction 27 What is a feature in computer vision? 27 ■ What makes a good (useful) feature? 28 ■ Extracting features (handcrafted vs. automatic extracting) 31 1.7 Classifier learning algorithm 33 ...
3 Convolutional neural networks 92 3.1 Image classification using MLP 93 Input layer 94 ■ Hidden layers 96 ■ Output layer 96 ■ Putting it all together 97 ■ Drawbacks of MLPs for processing images 99 3.2 CNN architecture 102 The big picture 102 ■ A closer look at feature extraction 104 ■ A closer look at classificatio...
4.5 Improving the network and tuning hyperparameters 162 Collecting more data vs. tuning hyperparameters 162 ■ Parameters vs. hyperparameters 163 ■ Neural network hyperparameters 163 ■ Network architecture 164 4.6 Learning and optimization 166 Learning rate and decay schedule 166 ■ A systematic approach to find...
5.5 Inception and GoogLeNet 217 Novel features of Inception 217 ■ Inception module: Naive version 218 ■ Inception module with dimensionality reduction 220 ■ Inception architecture 223 ■ GoogLeNet in Keras 225 ■ Learning hyperparameters 229 ■ Inception performance on the CIFAR dataset 229 5.6 ResNet 230 Novel featur...
7.2 Region-based convolutional neural networks (R-CNNs) 292 R-CNN 293 ■ Fast R-CNN 297 ■ Faster R-CNN 300 ■ Recap of the R-CNN family 308 7.3 Single-shot detector (SSD) 310 High-level SSD architecture 311 ■ Base network 313 ■ Multi-scale feature layers 315 ■ Non-maximum suppression 319 7.4 You only look once (YOLO) 32...
9.2 DeepDream 384 How the DeepDream algorithm works 385 ■ DeepDream implementation in Keras 387 9.3 Neural style transfer 392 Content loss 393 ■ Style loss 396 ■ Total variance loss 397 ■ Network training 397 10 Visual embeddings 400 10.1 Applications of visual embeddings 402 Face recognition 402 ■ Image recommendat...
preface
Two years ago, I decided to write a book to teach deep learning for computer vision from an intuitive perspective. My goal was to develop a comprehensive resource that takes learners from knowing only the basics of machine learning to building advanced deep learning algorithms that they can apply to solve ...
As a beginner, I searched but couldn’t find anything to meet these needs. So now I’ve written it. My goal has been to write a book that not only teaches the content I wanted when I was starting out, but also levels up your ability to learn on your own. My solution is a comprehensive book that dives deep ...
acknowledgments
This book was a lot of work. No, make that really a lot of work! But I hope you will find it valuable. There are quite a few people I’d like to thank for helping me along the way. I would like to thank the people at Manning who made this book possible: publisher Marjan Bace and everyone on the edi...
about this book
Who should read this book
If you know the basic machine learning framework, can hack around in Python, and want to learn how to build and train advanced, production-ready neural networks to solve complex computer vision problems, I wrote this book for you. The book was written for anyone with interme...
doesn’t interrupt your flow of understanding the concepts without the math part if you prefer.
How this book is organized: A roadmap
This book is structured into three parts. The first part explains deep learning in detail as a foundation for the remaining topics. I strongly recommend that you not ...
access the forum, go to https://livebook.manning.com/#!/book/deep-learning-for-vision-systems/discussion. You can also learn more about Manning’s forums and the rules of conduct at https://livebook.manning.com/#!/discussion. Manning’s commitment to our readers is to provide a venue where a m...
about the author
Mohamed Elgendy is the vice president of engineering at Rakuten, where he is leading the development of its AI platform and products. Previously, he served as head of engineering at Synapse Technology, building proprietary computer vision applications to detect threats at security checkpoints w...
about the cover illustration
The figure on the cover of Deep Learning for Vision Systems depicts Ibn al-Haytham, an Arab mathematician, astronomer, and physicist who is often referred to as “the father of modern optics” due to his significant contributions to the principles of optics and visual perception. The illus...
Part 1 Deep learning foundation
Computer vision is a technological area that’s been advancing rapidly thanks to the tremendous advances in artificial intelligence and deep learning that have taken place in the past few years. Neural networks now help self-driving cars to navigate around other cars, pedestrians, and ot...
1 Welcome to computer vision
Hello! I’m very excited that you are here. You are making a great decision—to grasp deep learning (DL) and computer vision (CV). The timing couldn’t be more perfect. CV is an area that’s been advancing rapidly, thanks to the huge AI and DL advances of recent years. Neural networks are now al...
objects—DL has given computers the power to imagine and create new things like artwork; new objects; and even unique, realistic human faces. The main reason that I’m excited about deep learning for computer vision, and what drew me to this field, is how rapid ad...
1.1.1 What is visual perception?
Visual perception, at its most basic, is the act of observing patterns and objects through sight or visual input. With an autonomous vehicle, for example, visual perception means understanding the surrounding objects and their specific details—such as pedestrians, or ...
and detect objects in this image because we have been trained over the years to identify dogs. Suppose someone shows you a picture of a dog for the first time—you definitely don’t know what it is. Then they tell you that this is a dog. After a couple of experiments like this, you...
1.1.3 Sensing devices
Vision systems are designed to fulfill a specific task. An important aspect of design is selecting the best sensing device to capture the surroundings of a specific environment, whether that is a camera, radar, X-ray, CT scan, Lidar, or a combination of devices to provide the f...
1.1.4 Interpreting devices
Computer vision algorithms are typically employed as interpreting devices. The interpreter is the brain of the vision system. Its role is to take the output image from the sensing device and learn features and patterns to identify objects. So we need t...
DL methods learn representations through a sequence of transformations of data through layers of neurons. In this book, we will explore different DL architectures, such as ANNs and convolutional neural networks, and how they are used in CV applications. [Figure: biological neuron vs. artificial neuron]
CAN MACHINE LEARNING ACHIEVE BETTER PERFORMANCE THAN THE HUMAN BRAIN? Well, if you had asked me this question 10 years ago, I would’ve probably said no, machines cannot surpass the accuracy of a human. But let’s take a look at the following two scenarios: Suppose you w...
late stages. When diagnosing lung cancer, doctors typically use their eyes to examine CT scan images, looking for small nodules in the lungs. In the early stages, the nodules are usually very small and hard to spot. Several CV companies decided to tackle this challenge using DL tech...
1.2.2 Object detection and localization
Image classification problems are the most basic applications for CNNs. In these problems, each image contains only one object, and our task is to identify it. But if we aim to reach human levels of understanding, we have to add complexit...
1.2.4 Creating images
Although the earlier examples are truly impressive CV applications of AI, this is where I see the real magic happening: the magic of creation. In 2014, Ian Goodfellow invented a new DL model that can imagine new things called generative adversarial networks (G...
considered a major advancement in DL. So when you understand CNNs, GANs will make a lot more sense to you. GANs are sophisticated DL models that generate stunningly accurate synthesized images of objects, people, and places, among other things. If you give them a set of images,...
1.2.5 Face recognition
Face recognition (FR) allows us to exactly identify or tag an image of a person. Day-to-day applications include searching for celebrities on the web and auto-tagging friends and family in images. Face recognition is a form of fine-grained classification. The...
[Figure 1.10 Example of face verification (left) and face recognition (right)] Figure 1.11 Apparel search. The leftmost image in each ...
1.3 Computer vision pipeline: The big picture
Okay, now that I have your attention, let’s dig one level deeper into CV systems. Remember that earlier in this chapter, we discussed how vision systems are composed of two main components: sensing devices and interpreting device...
DEFINITIONS An image classifier is an algorithm that takes in an image as input and outputs a label or “class” that identifies that image. A class (also called a category) in machine learning is the output category of your data. Here is how the image flows through the classific...
(d) The object has only two wheels; this is closer to a motorcycle. (e) And you keep going through all the features like the body shape, pedal, and so on, until you arrive at a best guess of the object in the image. The output of this process is the probability of each class. As you can see in our example, t...
The image in figure 1.14 has a size of 32 × 16. This means the dimensions of the image are 32 pixels wide and 16 pixels tall. The x-axis goes from 0 to 31, and the y-axis from 0 to 15. Overall, the image has 512 (32 × 16) pixels. In this grayscale image, each pixel contains a val...
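To make the pixel-grid picture concrete, here is a minimal NumPy sketch; the array values are synthetic, not taken from the book's figure:

```python
import numpy as np

# A grayscale image is just a 2D matrix of intensity values.
# We build a synthetic image 16 rows tall and 32 columns wide, matching
# the 32 x 16 example: values range from 0 (black) to 255 (white).
height, width = 16, 32
image = np.linspace(0, 255, height * width, dtype=np.uint8).reshape(height, width)

print(image.shape)               # (16, 32) -> 512 pixels in total
print(image.min(), image.max())  # intensities stay within 0..255
```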
1.4.2 How computers see images
When we look at an image, we see objects, landscape, colors, and so on. But that’s not the case with computers. Consider figure 1.16. Your human brain can process it and immediately know that it is a picture of a motorcycle. To a computer, the image looks like a 2D matrix o...
image has three numbers (0 to 255) associated with it. These numbers represent the intensity of red, green, and blue in that particular pixel. If we take the pixel (0,0) as an example, we will see that it represents the top-left pixel of the image of green grass. When we view ...
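The three-channel idea can be sketched with a tiny synthetic image; the dimensions and values are illustrative, not the book's grass photo:

```python
import numpy as np

# A color image adds a third dimension: one value per channel (R, G, B)
# at every pixel. This synthetic 4x4 image is mostly green, echoing the
# green-grass example.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[:, :, 1] = 200           # fill only the green channel

r, g, b = rgb[0, 0]          # pixel (0, 0): the top-left corner
print(r, g, b)               # 0 200 0 -> a green pixel
```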
1.5 Image preprocessing
In machine learning (ML) projects, you usually go through a data preprocessing or cleaning step. As an ML engineer, you will spend a good amount of time cleaning up and preparing the data before you build your learning model. The goal of this step is to make your data read...
necessary to recognize and interpret an image. Grayscale can be good enough for recognizing certain objects. Since color images contain more information than black-and-white images, they can add unnecessary complexity and take up more space in memory. Remember that color image...
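One common way to do this conversion is a weighted sum of the three channels; libraries such as OpenCV or Pillow provide it built in, and this sketch just spells out the idea (the weights are the standard ITU-R BT.601 luma coefficients, an assumption, not the book's exact recipe):

```python
import numpy as np

# Collapse an RGB image (H x W x 3) to grayscale (H x W) with a weighted
# sum of the channels, reflecting how sensitive the eye is to each color.
def to_grayscale(rgb):
    weights = np.array([0.299, 0.587, 0.114])  # R, G, B luma weights
    return (rgb @ weights).astype(np.uint8)

rgb = np.full((2, 2, 3), 255, dtype=np.uint8)  # a tiny all-white image
gray = to_grayscale(rgb)
print(gray.shape)  # (2, 2): three channels collapsed into one
```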
your images. This makes it more likely that your model will recognize objects when they appear in any form and shape. Figure 1.21 shows an example of image augmentation applied to a butterfly image. Other techniques—Many more preprocessing techniques are available to get your images ready for t...
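Basic augmentations like the flips and rotations just described can be sketched with NumPy one-liners; real pipelines usually rely on a library (for example, Keras preprocessing utilities), and the toy array here is only for illustration:

```python
import numpy as np

# Create new training variants from a single "image" (a toy 3x4 matrix).
image = np.arange(12, dtype=np.uint8).reshape(3, 4)

flipped_lr = np.fliplr(image)  # mirror horizontally (left-right)
flipped_ud = np.flipud(image)  # mirror vertically (up-down)
rotated = np.rot90(image)      # rotate 90 degrees counterclockwise

print(rotated.shape)           # (4, 3): width and height swap after rotation
```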
the appropriate processing techniques based on the dataset at hand and the problem you are solving. You will see many image-processing techniques throughout this book, helping you build your intuition of which ones you need when working on your own projects. No free lunch theor...
1.6 Feature extraction
Feature extraction is a core component of the CV pipeline. In fact, the entire DL model works around the idea of extracting useful features that clearly define the objects in the image. So we’ll spend a little more time here, because it is important that you understand what...
1.6.2 What makes a good (useful) feature?
Machine learning models are only as good as the features you provide. That means coming up with good features is an important job in building ML models. But what makes a good feature? And how can you tell?
Feature generalizability
It is ...
Let’s discuss this with an example. Suppose we want to build a classifier to tell the difference between two types of dogs: Greyhound and Labrador. Let’s take two features—the dogs’ height and their eye color—and evaluate them (figure 1.23). Let’s begin with height. How useful do you think thi...
the dog is a Greyhound. Now, what about the data in the middle of the histogram (heights from 20 to 30 inches)? We can see that the probability of each type of dog is pretty close. The thought process in this case is as follows: if height ≤ 20: return higher probability t...
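The thresholding idea above can be made runnable. The 20- and 30-inch cutoffs follow the histogram discussion; the probability numbers are illustrative placeholders, not values from the book:

```python
# A sketch of the height-based reasoning: short dogs lean Labrador,
# tall dogs lean Greyhound, and the overlap zone is uninformative.
def dog_probabilities(height_inches):
    if height_inches <= 20:
        return {"labrador": 0.8, "greyhound": 0.2}  # short: likely Labrador
    elif height_inches >= 30:
        return {"labrador": 0.2, "greyhound": 0.8}  # tall: likely Greyhound
    else:
        return {"labrador": 0.5, "greyhound": 0.5}  # overlap: a coin flip

print(dog_probabilities(18))
```

This also shows why the feature is only partially useful: for heights between 20 and 30 inches it tells us nothing.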
It is clear that for most values, the distribution is about 50/50 for both types. So practically, this feature tells us nothing, because it doesn’t correlate with the type of dog. Hence, it doesn’t distinguish between Greyhounds and Labradors. 1.6.3 Extracting features (handcrafted vs. automatic...
DEEP LEARNING USING AUTOMATICALLY EXTRACTED FEATURES In DL, however, we do not need to manually extract features from the image. The network extracts features automatically and learns their importance on the output by applying weights to its connections. You just feed the r...
WHY USE FEATURES? The input image has too much extra information that is not necessary for classification. Therefore, the first step after preprocessing the image is to simplify it by extracting the important information and throwing away nonessential information. By extracting imp...
Now it is time to feed the extracted feature vector to the classifier to output a class label for the images (for example, motorcycle or otherwise). As we discussed in the previous section, the classification task is done one of these ways: traditional ML algorithms like SVMs, ...
Summary
An image can be represented as a function of x and y. Computers see an image as a matrix of pixel values: one channel for grayscale images and three channels for color images. Image-processing techniques vary for each problem and dataset. Some of these techniques are converting images to grayscale to reduc...
2 Deep learning and neural networks
In the last chapter, we discussed the computer vision (CV) pipeline components: the input image, preprocessing, extracting features, and the learning algorithm (classifier). We also discussed that in traditional ML algorithms, we manually extract features that produce a vector of fea...
back to CV applications with one of the most popular DL architectures: convolutional neural networks. The high-level layout of this chapter is as follows: We will begin with the most basic component of the neural network: the perceptron, a neural network that contains only one neuron. ...
also called a multilayer perceptron, which is more intuitive because it implies that the network consists of perceptrons structured in multiple layers. Both terms, MLP and ANN, are used interchangeably to describe this neural network architecture. In the MLP diagram in...
In the perceptron diagram in figure 2.4, you can see the following: Input vector—The feature vector that is fed to the neuron. It is usually denoted with an uppercase X to represent a vector of inputs (x1, x2, . . ., xn). Weights vector—Each input xi is assigned a weight value wi that repres...
Neuron functions—The calculations performed within the neuron to modulate the input signals: the weighted sum and step activation function. Output—Controlled by the type of activation function you choose for your network. There are different activation functions, as ...
What is a bias in the perceptron, and why do we add it? Let’s brush up our memory on some linear algebra concepts to help understand what’s happening under the hood. The function of a straight line is represented by the equation (y = mx + b), whe...
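To tie the straight-line equation back to the neuron, here is a toy sketch of the weighted sum with a bias term; the input, weight, and bias values are illustrative:

```python
# The perceptron's weighted sum mirrors y = mx + b: the weights play the
# role of the slope m, and the bias b shifts the decision boundary so it
# does not have to pass through the origin.
def weighted_sum(inputs, weights, bias):
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# 1*0.5 + 2*(-0.25) + 0.1
z = weighted_sum([1.0, 2.0], [0.5, -0.25], bias=0.1)
print(z)
```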
STEP ACTIVATION FUNCTION In both artificial and biological neural networks, a neuron does not just output the bare input it receives. Instead, there is one more step, called an activation function; this is the decision-making unit of the brain. In ANNs, the activation fu...
2.1.2 How does the perceptron learn?
The perceptron uses trial and error to learn from its mistakes. It uses the weights as knobs, tuning their values up and down until the network is trained (figure 2.6). The perceptron’s learning logic goes like this: 1. The neuron calculates the weighted...
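The trial-and-error loop can be sketched end to end: predict, measure the error, nudge the weights, repeat. The AND dataset, learning rate, and epoch count here are illustrative choices, not the book's example:

```python
# A minimal perceptron training loop with two inputs and a step activation.
def train_perceptron(data, labels, lr=0.1, epochs=20):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(data, labels):
            z = x[0] * weights[0] + x[1] * weights[1] + bias
            prediction = 1 if z > 0 else 0   # step activation
            error = target - prediction      # how wrong were we?
            weights[0] += lr * error * x[0]  # tune the knobs up or down
            weights[1] += lr * error * x[1]
            bias += lr * error
    return weights, bias

data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]                        # logical AND: linearly separable
weights, bias = train_perceptron(data, labels)
```

Because AND is linearly separable, the loop settles on a line that classifies all four points correctly.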
perceptron to predict whether players will be accepted based on only two features (height and weight). The trained perceptron will find the best weights and bias values to produce the straight line that best separates the accepted from the non-accepted (best fit). The line ha...
training data. In fact, if we add too many neurons, this will make the network overfit the training data (not good). But we will talk about this later. The general rule here is that the more complex our network is, the better it learns the features of our data.
2.2 Multilayer perceptrons
We ...
To split a nonlinear dataset, we need more than one line. This means we need to come up with an architecture to use tens and hundreds of neurons in our neural network. Let’s look at the example in figure 2.9. Remember that a perceptron is a linear function that produces ...
Output layer—We get the answer or prediction from our model from the output layer. Depending on the setup of the neural network, the final output may be a real-valued output (regression problem) or a set of probabilities (classification problem). This is determined by the type of activation ...
well as recommend some starting points. The number of layers and the number of neurons in each layer are among the important hyperparameters you will be designing when working with neural networks. A network can have one or more hidden layers (technically, as many as yo...
In later chapters, we will discuss other variations of neural network architecture (like convolutional and recurrent networks). For now, know that this is the most basic neural network architecture, and it can be referred to by any of these names: ANN, MLP, fully connected network, or feedfo...
2.2.4 Some takeaways from this section
Let’s recap what we’ve discussed so far: We talked about the analogy between biological and artificial neurons: both have inputs and a neuron that does some calculations to modulate the input signals and create output. We zoomed in...
mini-batch gradient descent. Adam and RMSprop are two other popular optimizers that we don’t discuss. Batch size—Mini-batch size is the number of sub-samples given to the network, after which a parameter update happens. Bigger batch sizes learn faster but require more memory space. A good def...
Why use activation functions at all? Why not just calculate the weighted sum of our network and propagate that through the hidden layers to produce an output? The purpose of the activation function is to introduce nonlinearity into the network. Without it, a multilayer...
2.3.1 Linear transfer function
A linear transfer function, also called an identity function, indicates that the function passes a signal through unchanged. In practical terms, the output will be equal to the input, which means we don’t actually have an activation function. So no matter how ma...
2.3.2 Heaviside step function (binary classifier)
The step function produces a binary output. It basically says that if the input x > 0, it fires (output y = 1); otherwise (input ≤ 0), it doesn’t fire (output y = 0). It is mainly used in binary classification problems like tru...
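The step function above is a one-liner in code (a minimal sketch, matching the fire/don't-fire rule just described):

```python
# Heaviside step: fire (1) when the input is positive, stay silent (0)
# otherwise.
def step(x):
    return 1 if x > 0 else 0

print(step(2.5), step(-1.0))  # 1 0
```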
2.3.3 Sigmoid/logistic function
This is one of the most common activation functions. It is often used in binary classifiers to predict the probability of a class when you have two classes. The sigmoid squishes all the values to a probability between 0 and 1, which reduces extreme values or ou...
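The squashing behavior is easy to verify in code; this is the standard logistic formula σ(z) = 1 / (1 + e^(–z)):

```python
import math

# The sigmoid maps any real input into the open interval (0, 1), which is
# why its output can be read as a probability in binary classifiers.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))  # 0.5: an input of zero sits exactly in the middle
```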
Just-in-time linear algebra (optional)
Let’s take a deeper dive into the math side of the sigmoid function to understand the problem it helps solve and how the sigmoid function equation is derived. Suppose that we are trying to predict whether patients have diabetes based o...
2.3.4 Softmax function
The softmax function is a generalization of the sigmoid function. It is used to obtain classification probabilities when we have more than two classes. It forces the outputs of a neural network to sum to 1, with each individual output between 0 and 1. A very common use case in deep learn...
TIP Softmax is the go-to function that you will often use at the output layer of a classifier when you are working on a problem where you need to predict a class from among more than two classes. Softmax works fine if you are classifying two classes as well. It will basicall...
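The sum-to-1 property of softmax can be sketched directly; the max-subtraction step is a standard numerical-stability trick, not something the book's text calls out:

```python
import math

# Softmax turns a vector of raw scores into probabilities that sum to 1.
def softmax(scores):
    m = max(scores)                              # stability: shift scores
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(sum(probs))  # 1.0 (up to floating-point rounding)
```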
At the time of writing, ReLU is considered the state-of-the-art activation function because it works well in many different situations, and it tends to train better than sigmoid and tanh in hidden layers (figure 2.18). Here is how ReLU is implemented in Python:

def relu(x):
    if x < 0:
        return 0
    else:
        return x
Here is how Leaky ReLU is implemented in Python:

def leaky_relu(x):
    if x < 0:
        return x * 0.01
    else:
        return x

Table 2.1 summarizes the various activation functions we’ve discussed in this section. Table 2.1 A cheat sheet of the most common activation fu...
Sigmoid/logistic function: Squishes all the values to a probability between 0 and 1, which reduces extreme values or outliers in the data. Usually used to classify two classes. σ(z) = 1 / (1 + e^(–z)). Softmax function: A generalization of the sigmoid function. Used to obtain classification probabil...
2.4 The feedforward process
Now that you understand how to stack perceptrons in layers, connect them with weights/edges, perform a weighted sum function, and apply activation functions, let’s implement the complete forward-pass calculations to produce a prediction output. ...
Weights and biases (w, b)—The edges between nodes are assigned random weights denoted as Wab(n), where (n) indicates the layer number and (ab) indicates the weighted edge connecting the ath neuron in layer (n) to the bth neuron in the previous layer (n – 1). For example, W23(2) is t...
2.4.1 Feedforward calculations
We have all we need to start the feedforward calculations:

a1(1) = σ(w11(1)x1 + w21(1)x2 + w31(1)x3)
a2(1) = σ(w12(1)x1 + w22(1)x2 + w32(1)x3)
a3(1) = σ(w13(1)x1 + w23(1)x2 + w33(1)x3)

Then we do the same calculations for layer 2, and a4(2...
65 The feedforward process 2.4.2 Feature learning The nodes in the hidden layers ( ai) are the new features that are learned after each layer. For example, if you look at figure 2.20, you see that we have three feature inputs ( x1, x2, and x3). After computing the forward pass in the first layer, the net- work learns p...
That is how a neural network learns new features: via the network’s hidden layers. First, they recognize patterns in the data. Then they recognize patterns within patterns, then patterns within patterns within patterns, and so on. The deeper the network is, the more it learns …
Vectors and matrices refresher
If you understood the matrix calculations we just did in the feedforward discussion, feel free to skip this sidebar. If you are still not convinced, hang tight: this sidebar is for you. The feedforward calculations are a set of matrix multiplications. While you …
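As a concrete instance of that matrix view (the numbers here are made up for illustration): each row of the weight matrix dotted with the input vector produces one neuron’s weighted sum.

```python
import numpy as np

W = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6]])   # 2 x 3 weight matrix: two neurons, three inputs
x = np.array([1.0, 2.0, 3.0])     # input vector

z = W @ x                         # one dot product per row of W
print(z)                          # approximately [1.4, 3.2]
```

The shape rule is the thing to remember: a (2 × 3) matrix times a 3-vector yields a 2-vector, one entry per output neuron.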
2.5 Error functions
So far, you have learned how to implement the forward pass in neural networks to produce a prediction that consists of the weighted-sum and activation operations. Now, how do we evaluate the prediction that the network just produced? More importantly, …
2.5.1 What is the error function?
The error function is a measure of how “wrong” the neural network’s prediction is with respect to the expected output (the label). It quantifies how far we are from the correct solution. For example, if we have a high loss, then our model is not doing a good job. T…
2.5.4 Mean squared error
Mean squared error (MSE) is commonly used in regression problems that require the output to be a real value (like house pricing). Instead of just comparing the prediction output with the label (ŷ_i − y_i), the error is squared and averaged over the N examples in the dataset:

E(W, b) = (1/N) Σ_i (ŷ_i − y_i)² …
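A hedged sketch of MSE in NumPy, using made-up house-price numbers purely for illustration:

```python
import numpy as np

def mse(y_hat, y):
    """Mean squared error: average of (y_hat_i - y_i)^2 over the dataset."""
    return np.mean((y_hat - y) ** 2)

y     = np.array([200.0, 350.0, 120.0])  # true house prices (made-up data)
y_hat = np.array([210.0, 330.0, 121.0])  # model predictions
print(mse(y_hat, y))  # (100 + 400 + 1) / 3 = 167.0
```

Notice how the squaring magnifies the 20-unit miss (contributing 400) far more than the 1-unit miss (contributing 1), which is exactly why MSE is sensitive to outliers.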
mean absolute error (MAE) was developed just for this purpose. It averages the absolute error over the entire dataset without taking the square of the error:

E(W, b) = (1/N) Σ_i |ŷ_i − y_i|

2.5.5 Cross-entropy
Cross-entropy is commonly used in classification problems because it quantifies the difference betw…
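The MAE formula can be sketched the same way, reusing the made-up house-price numbers from the MSE example:

```python
import numpy as np

def mae(y_hat, y):
    """Mean absolute error: average of |y_hat_i - y_i| over the dataset."""
    return np.mean(np.abs(y_hat - y))

y     = np.array([200.0, 350.0, 120.0])  # true house prices (made-up data)
y_hat = np.array([210.0, 330.0, 121.0])  # model predictions
print(mae(y_hat, y))  # (10 + 20 + 1) / 3, approximately 10.33
```

Compare this with the MSE of 167.0 on the same data: without the squaring, the 20-unit miss contributes 20 rather than 400, so a single outlier distorts the average far less.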
where (y) is the target probability, (p) is the predicted probability, and (m) is the number of classes. The sum is over the three classes: cat, dog, and fish. In this case, the loss is approximately 1.2:

E = −(0.0 × log(0.2) + 1.0 × log(0.3) + 0.0 × log(0.5)) ≈ 1.2

So that is how…
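The cat/dog/fish example can be reproduced in a few lines (natural log, as is conventional for cross-entropy):

```python
import numpy as np

def cross_entropy(y, p):
    """E = -sum_i y_i * log(p_i), summed over the m classes."""
    return -np.sum(y * np.log(p))

y = np.array([0.0, 1.0, 0.0])   # target: the true class is "dog"
p = np.array([0.2, 0.3, 0.5])   # predicted probabilities for cat, dog, fish
print(round(cross_entropy(y, p), 2))  # 1.2
```

Because the target vector is one-hot, only the true class’s term survives: the loss reduces to −log(0.3) ≈ 1.2, and it would shrink toward 0 as the predicted probability of “dog” approached 1.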
Suppose the input x = 0.3, and its label (goal prediction) y = 0.8. The prediction output (ŷ) of this perceptron is calculated as follows:

ŷ = w · x = w · 0.3

And the error, in its simplest form, is calculated by comparing the prediction ŷ and the label y:

error = |ŷ − y| …
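To see how the error changes as the weight changes, here is a tiny sketch; only x = 0.3 and y = 0.8 come from the text, and the candidate weight values are made up:

```python
x, y = 0.3, 0.8
for w in [0.5, 1.5, 2.0, 2.67, 3.0]:
    y_hat = w * x                  # prediction of the single-weight perceptron
    print(f"w={w:<5} y_hat={y_hat:.3f} error={abs(y_hat - y):.3f}")
```

The error shrinks as w approaches 0.8 / 0.3 ≈ 2.67 and grows again past it, which is the one-dimensional error curve the optimization discussion builds on.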
2.6 Optimization algorithms
Training a neural network involves showing the network many examples (a training dataset); the network makes predictions through feedforward calculations and compares them with the correct labels to calculate the error. Finally, the neural net…
Since we humans can only visualize a maximum of three dimensions, it is impossible for us to picture error surfaces when we have 10 weights, not to mention hundreds or thousands of weight parameters. So, from this point on, we will study the error function using the 2D or 3D plane …
have very few inputs and only one or two neurons in our network. Let me try to convince you that this approach wouldn’t scale. Let’s take a look at a scenario where we have a very simple neural network. Suppose we want to predict house prices based on only four features …
operations per second (FLOPS). In the best-case scenario, this supercomputer would need

1.08 × 10^58 seconds ≈ 3.42 × 10^50 years

That is a huge number: it’s longer than the universe has existed. Who has that kind of time to wait for the network to train? Remember that this is a very simple …
HOW DOES GRADIENT DESCENT WORK?
To visualize how gradient descent works, let’s plot the error function in a 3D graph (figure 2.31) and go through the process step by step. The random initial weight (starting weight) is at point A, and our goal is to descend this error m…
THE DIRECTION (GRADIENT)
Suppose you are standing on top of the error mountain at point A. To get to the bottom, you need to determine the step direction that results in the steepest descent (the steepest slope). And what is the slope, again? It is the derivative of the curve. So if …
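Putting the pieces together, here is a minimal gradient-descent sketch for the single-weight perceptron from the error-function example, minimizing the squared error E(w) = (w · x − y)²; the starting weight, learning rate, and iteration count are illustrative assumptions:

```python
x, y = 0.3, 0.8          # input and label from the earlier example
w  = 0.0                 # initial weight (point A on the error curve)
lr = 0.5                 # learning rate (step size)

for _ in range(200):
    grad = 2 * (w * x - y) * x   # dE/dw, the slope of the error curve at w
    w -= lr * grad               # step against the gradient, i.e. downhill

print(w)                 # converges toward y / x, approximately 2.667
```

Each iteration evaluates the slope at the current weight and moves opposite to it, so the steps shrink automatically as the bottom of the curve flattens out.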